Platform Engineering is a cross-functional engineering division composed of domain experts whose purpose is to accelerate organizational software engineering while reducing risk. A mature Platform Engineering unit accomplishes this through soft power: its expertise in producing software, its track record of success, and its ability to resolve organization-wide issues. However, the path towards a mature Platform Engineering unit requires direct support from organizational leadership to negotiate healthy boundaries and proper utilization of the unit.
The clients of a Platform Engineering division are the Software Engineers within an organization; Platform Engineering provides a deep well of expertise for those engineers to draw on. Your customers should not directly interact with your Platform Engineering group.
Platform Engineering can be broken up into the following pillars:
Here is a brief list of what Platform Engineering is not:
A Platform Engineering division is a sociotechnical force multiplier within a software engineering organization, providing a deep well of excellence on the production and operation of software. It differs from a NOC-style approach in that product teams own and operate their particular application suite, potentially on shared infrastructure, with guidance from Platform Engineering. Successful adoption of the owner + operator philosophy of DevOps is critical to the successful implementation of a Platform Engineering division.
Platform Engineering has been on the rise as a term in the past few years. I have worked with several organizations on establishing new Platform Engineering units, or on augmenting existing operational practices to move towards providing what I view as Platform Engineering. Although I stand on the shoulders of giants, much of this is filtered through my experience of what has worked and which pitfalls to avoid.
Major changes in 0.69…0.72
Overall it does not look like too many user-level changes. Based on the React Native documentation this should be a fairly straightforward process:
```bash
npx react-native upgrade
```
Well…I received a ton of errors along the lines of the following:
```
warn Package @sentry/react-native contains invalid configuration: "dependency.platforms.ios.sharedLibraries" is not allowed,"dependency.hooks" is not allowed. Please verify it's properly linked using "react-native config" command and contact the package maintainers about this.
warn Package react-native-sqlite-storage contains invalid configuration: "dependency.platforms.ios.project" is not allowed. Please verify it's properly linked using "react-native config" command and contact the package maintainers about this.
info No version passed. Fetching latest...
info Fetching diff between v0.69.4 and v0.71.4...
info Applying diff...
warn Excluding files that exist in the template, but not in your project:
  - .flowconfig
  - App.js
  - __tests__/App-test.js
  - ios/mee.xcworkspace/contents.xcworkspacedata
error Excluding files that failed to apply the diff:
  - android/app/build.gradle
  - android/app/src/main/AndroidManifest.xml
  - android/app/src/main/java/com/eschbachgroup/thebachs/newarchitecture/modules/MainApplicationTurboModuleManagerDelegate.java
  - android/app/src/main/jni/Android.mk
  - android/app/src/main/jni/MainApplicationTurboModuleManagerDelegate.h
  - android/app/src/main/jni/MainComponentsRegistry.h
Please make sure to check the actual changes after the upgrade command is finished.
You can find them in our Upgrade Helper web app: https://react-native-community.github.io/upgrade-helper/?from=0.69.4&to=0.71.4
error Automatically applying diff failed. We did our best to automatically upgrade as many files as possible
warn Continuing after failure. Some of the files are upgraded but you will need to deal with conflicts manually
info Installing "react-native@0.71.4" and its peer dependencies...
info Running "git status" to check what changed...
On branch main
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
	modified:   package.json
	modified:   yarn.lock
warn Please run "git diff" to review the conflicts and resolve them
warn After resolving conflicts don't forget to run "pod install" inside "ios" directory
info You may find these resources helpful:
• Release notes: https://github.com/facebook/react-native/releases/tag/v0.71.4
• Manual Upgrade Helper: https://react-native-community.github.io/upgrade-helper/?from=0.69.4&to=0.71.4
• Git diff: https://raw.githubusercontent.com/react-native-community/rn-diff-purge/diffs/diffs/0.69.4..0.71.4.diff
error Upgrade failed. Please see the messages above for details.
info Run CLI with --verbose flag for more details.
```
So running `yarn android dev --variant=Debug --appIdSuffix=debug` resulted in the following error:
```
Error: Command failed: ./gradlew app:installDebug -PreactNativeDevServerPort=8081
e: /Users/gremlin/wc/mee/apps/mobile/mee/node_modules/react-native-gradle-plugin/src/main/kotlin/com/facebook/react/tasks/BundleHermesCTask.kt: (138, 11): This declaration is experimental and its usage must be marked with '@kotlin.ExperimentalStdlibApi' or '@OptIn(kotlin.ExperimentalStdlibApi::class)'
```
Seems to indicate something went wrong with `react-native-gradle-plugin`. Looking through the updated diff in the log message, none of it seems to have been applied. At this point I'll revisit another time.
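One avenue for that future revisit, purely an assumption on my part since I have not tried it: the `@OptIn` error reads like a Kotlin version mismatch between `react-native-gradle-plugin` and whatever Kotlin the Android build pulls in, so pinning the Kotlin Gradle plugin to the version the 0.71 template expects might clear it. A sketch of what that could look like in `android/build.gradle`:

```groovy
// Untested sketch: align the Kotlin Gradle plugin with the RN template.
// The exact version is a guess; check the upgrade-helper diff for 0.71.4.
buildscript {
    ext {
        kotlinVersion = "1.7.22"
    }
    dependencies {
        classpath("org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlinVersion")
    }
}
```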
Perhaps I missed something, however the price of thermostats, especially smart thermostats, seems absolutely silly. Really my desire is for a relatively unintelligent node controlling the relays, moving the smarts into something like Home Assistant. Probably a bad idea, but I bet I can get below $60 for an internet-connected device.
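As a sketch of what that dumb node could look like, assuming something like ESPHome on an Espressif board (the board, pin, and names below are illustrative guesses, not a tested configuration): the firmware only exposes the relay channels as switches, and all scheduling logic lives in Home Assistant.

```yaml
# Minimal ESPHome sketch: the node only exposes an HVAC relay as a switch.
esphome:
  name: thermostat-node

esp32:
  board: esp32dev  # assumed board

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api: # native Home Assistant integration; the "smarts" live on that side

switch:
  - platform: gpio
    pin: GPIO23  # assumed relay wiring
    name: "HVAC Heat Relay"
```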
Hilariously, while researching parts I came across an Espressif board containing a temperature and humidity sensor.
| Component | Usage | Price |
|---|---|---|
| ESP DevKitC (v4) | CPU/MCU | $7.595 ($15.19 for 2) |
| 4-channel Songle SRD-05VDC-SL-C | HVAC signaling | $7.99 |
Cases
There are many options for embedded storage with Golang on OSX. Since AppWatcher effectively captures a stream of focused applications, time series might be an interesting choice. Giving nakabonne/tstorage a shot!
This will break the app into two parts:

- Writing the `active` metric series, with various things like `bundleID` as labels on the data.
- Replaying the `active` metrics.

Using modules: `go get -u github.com/nakabonne/tstorage`. Added a new flag to store via `tstorage` with a data path.
Using a service-style design, the component will consume a channel and write a record. From the setup side we need to initialize the data store like the following:
```go
import (
	"context"
	"log"

	"github.com/nakabonne/tstorage"
)

type TStorageEngine struct {
	store tstorage.Storage
}

func NewTStorage(ctx context.Context, filePath string) (*TStorageEngine, error) {
	// Millisecond precision, persisted at the path supplied by the new flag.
	storage, err := tstorage.NewStorage(
		tstorage.WithTimestampPrecision(tstorage.Milliseconds),
		tstorage.WithDataPath(filePath),
	)
	if err != nil {
		return nil, err
	}
	return &TStorageEngine{store: storage}, nil
}

func (t *TStorageEngine) Close() {
	// TODO: Complain somewhere better if there is a problem
	if err := t.store.Close(); err != nil {
		log.Printf("closing tstorage: %v", err)
	}
}
```
Seems to work like a charm. Sadly much of the API does not respect contexts, so I will plumb them in up to the point of the calls.
Now for the meat and potatoes:
```go
// storeRecord writes a single "active" observation for the focused application.
func (t *TStorageEngine) storeRecord(ctx context.Context, msg *appkit.RunningApplication) error {
	labels := []tstorage.Label{
		{Name: "bundleIdentifier", Value: msg.BundleIdentifier().Internalize()},
		{Name: "bundleURL", Value: msg.BundleURL().FileSystemPath()},
	}
	return t.store.InsertRows([]tstorage.Row{
		{
			Metric: "active",
			Labels: labels,
			DataPoint: tstorage.DataPoint{
				// The store was configured for millisecond precision, so use
				// UnixMilli rather than Unix seconds.
				Timestamp: time.Now().UnixMilli(),
			},
		},
	})
}
```
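The channel-consuming side of that service-style design looks roughly like this; the wiring below is my own sketch, assuming the watcher publishes `*appkit.RunningApplication` events on a channel:

```go
// Serve drains focus events and records each one until the context ends.
func (t *TStorageEngine) Serve(ctx context.Context, events <-chan *appkit.RunningApplication) error {
	for {
		select {
		case msg := <-events:
			if err := t.storeRecord(ctx, msg); err != nil {
				return err
			}
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```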
Have not figured out a good value so far. No context support either. Time to recall the data. Theoretically we have no idea what the labels are and we want all of the data, so something like this should work just to prove the query. From an API standpoint, having a third return value indicating no records, or just leaving it nil, would have been better in my opinion.
```go
func (t *TStorageEngine) ReplayAll(ctx context.Context) ([]int, error) {
	// Query the whole "active" series; nil labels will hopefully match everything.
	_, err := t.store.Select("active", nil, 0, time.Now().UnixMilli())
	if err != nil {
		if errors.Is(err, tstorage.ErrNoDataPoints) {
			return nil, nil
		}
		return nil, err
	}
	return nil, nil
}
```
No points recorded :-(. Looking at the created `test` directory I was running with, there is a WAL with zero bytes. Perhaps this is from the lack of a value? Setting the value to `1` still leaves a WAL of zero bytes and no other files. I would hope a WAL would survive an unclean shutdown. There are only 3 methods on the returned interface: `InsertRows`, `Select`, and `Close`.
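For reference, this is the surface as I read the library's source; double-check against the version you pull, since this is copied from my notes:

```go
// From github.com/nakabonne/tstorage, as I read it:
type Storage interface {
	Reader
	InsertRows(rows []Row) error
	Close() error
}

type Reader interface {
	Select(metric string, labels []Label, start, end int64) ([]*DataPoint, error)
}
```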
Moving on to the early close problem, I tried setting up handlers for `SIGTERM` and `SIGINT` via something like this:
```go
processContext, processDone := context.WithCancel(context.Background())
defer processDone()

// signal.Notify (from os/signal) registers interest in the signals;
// unix is golang.org/x/sys/unix.
procSignals := make(chan os.Signal, 10)
signal.Notify(procSignals, unix.SIGINT, unix.SIGTERM)
go func() {
	for {
		select {
		case sig := <-procSignals:
			switch sig {
			case unix.SIGINT:
				fmt.Printf("SIGINT received. Shutting down.\n")
				processDone()
			case unix.SIGTERM:
				fmt.Printf("SIGTERM received. Shutting down.\n")
				processDone()
			default:
				fmt.Printf("Unknown signal received: %v, ignoring.\n", sig)
			}
		case <-processContext.Done():
			return
		}
	}
}()
```
This causes `NSRunLoop` to exit immediately, not processing any events of interest. No values are returned from `run`, and there is no reference to signals in the documentation, which is interesting. Does the `sigaction` handler under the hood mean `NSRunLoop` has no input mechanisms? Perhaps the system uses signals under the hood and Go binds to all of them? Intercepting all signals does not produce anything.
So I am guessing there is another mechanism related to Mach and the NextStep-era messaging system, with the Golang runtime and that system fighting over signals. This mystery will need to wait for another time, sadly.
Ideally I would like to monitor voltage and current with a low-voltage cut-off. Figured it would probably come in two flavors: a current sensor and a voltage sensor.
There are two primitive transactions within options: `call` and `put`. Both are centered around a strike price on an underlying asset, for which a premium is paid. A `call` is underwritten to provide stock sold at the strike price when the underlying asset trades above that price within a given time period. A `put` is underwritten to purchase stock at a specific strike price when the underlying asset falls below the agreed-upon strike price. As you can imagine, these primitives provide a lot of emergent behavior in how one can profit or lose significantly.
My data source is a list of date-ordered transactions covering both stock and options. Here are the questions I am attempting to answer:
Until you add in options, answering the first question is simple. A stock is bought at a price, held until sold, then sold for a price. I generally use a FILO model for matching pairs; however, there are other models, such as FIFO. Tax implications are a question for a future date.
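As a sketch of what FILO matching means here (types and names are my own illustration, not from my actual tooling): each sell consumes the most recently purchased shares first.

```go
package main

import "fmt"

// Lot is a block of shares bought at one price.
type Lot struct {
	Shares int
	Price  float64
}

// matchFILO consumes lots from the top of the buy stack (last in, first out)
// to cover a sell, returning the realized profit for that sell.
func matchFILO(buys *[]Lot, sellShares int, sellPrice float64) float64 {
	profit := 0.0
	for sellShares > 0 && len(*buys) > 0 {
		top := &(*buys)[len(*buys)-1]
		matched := top.Shares
		if sellShares < matched {
			matched = sellShares
		}
		profit += float64(matched) * (sellPrice - top.Price)
		top.Shares -= matched
		sellShares -= matched
		if top.Shares == 0 {
			*buys = (*buys)[:len(*buys)-1] // lot fully consumed; pop it
		}
	}
	return profit
}

func main() {
	buys := []Lot{{Shares: 100, Price: 10}, {Shares: 50, Price: 12}}
	// Sell 75 shares at $15: 50 matched at $12, then 25 at $10.
	fmt.Println(matchFILO(&buys, 75, 15)) // 275
}
```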
I use `cash secured puts` and `covered calls` for underwriting. A `cash secured put` means actual cash is held in reserve, unable to be utilized in other trades. A `covered call` means the underlying stock is reserved for the specific call. These create a third state for both cash and stock: `reserved by underwrite`. Typically, option periods are scheduled to expire at the end of a week, anywhere from this week to several weeks out.
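A sketch of how I would model that third state (the field names and the standard 100-share contract multiplier are my own assumptions):

```go
import "fmt"

// Account tracks cash and shares in both free and reserved-by-underwrite states.
type Account struct {
	FreeCash       float64
	ReservedCash   float64 // backing cash secured puts
	FreeShares     int
	ReservedShares int // backing covered calls
}

// WriteCashSecuredPut moves enough cash into reserve to buy the shares if assigned.
func (a *Account) WriteCashSecuredPut(strike float64, contracts int) error {
	need := strike * 100 * float64(contracts) // standard 100-share contracts
	if a.FreeCash < need {
		return fmt.Errorf("insufficient free cash: need %.2f, have %.2f", need, a.FreeCash)
	}
	a.FreeCash -= need
	a.ReservedCash += need
	return nil
}

// WriteCoveredCall reserves shares to deliver if the call is exercised.
func (a *Account) WriteCoveredCall(contracts int) error {
	need := contracts * 100
	if a.FreeShares < need {
		return fmt.Errorf("insufficient free shares: need %d, have %d", need, a.FreeShares)
	}
	a.FreeShares -= need
	a.ReservedShares += need
	return nil
}
```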
For a given date, a report should contain something like the following:
This website has its origins back in the late 90s. It has seen a lot of technology, including being subject to some not-so-great experiments of mine over the years (XSLT!). As of right now I know I am using the following tech to generate the site's static content:
Tekton runs things in containers, so I will need to roll my own since I doubt I will find one with all of these on the correct versions. I would be scared if I did! Let us see what a basic Dockerfile sketch will do for us.
```dockerfile
FROM ubuntu:23.04
RUN apt-get update && apt-get -y install php nodejs ruby
RUN php --version
RUN node --version
RUN ruby --version
```
Ran into a problem with interactive prompts waiting for input while configuring timezones. To resolve that issue, add the following two lines. `TZ` sets the preferred timezone for the applications. `DEBIAN_FRONTEND=noninteractive` tells `apt` not to prompt for user input.
```dockerfile
ENV TZ="UTC"
ENV DEBIAN_FRONTEND=noninteractive
```
Alright, this gives us a container with reasonable versioning info. Next up: what does a build look like when scripted for development?
```bash
#!/bin/bash
# Build the builder image from its own directory, then the site image.
(cd .cd/builder && docker build --tag website-meschbach-builder:dev .)
docker build --tag website-meschbach:dev .
```
With a Dockerfile like so:
```dockerfile
FROM website-meschbach-builder:dev as builder
COPY . /source
WORKDIR /source
RUN ./generate.sh

FROM nginx:latest as final
# Output lands under the builder stage's /source working directory.
COPY --from=builder /source/target /usr/share/nginx/html
```
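A side note of my own, not part of the original setup: since `COPY . /source` drags in the whole context, a `.dockerignore` keeps stale artifacts out of the builder. The entries below are guesses at what this repository accumulates:

```
# Hypothetical .dockerignore
.git
target
node_modules
```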
Turns out I needed the following software installed too:

- `make` – Allowed for rapid development, since a single command would only build new things.
- `npm` – Not installed with `nodejs` through `apt`, I guess.
- `bundle` – Not installed with `ruby` :-/. Fixed by installing `ruby-full`.

I guess I use `bower` somewhere in there. I should really review that at some point :-D Ah, it's used with `revealjs`.
At least it has the decency to tell me not to run it as root >.< To work around this issue I changed the `builder` image to add the following:
```dockerfile
# Give the unprivileged nobody user a writable home for Gem installs.
RUN mkdir -p /nobody-home && chown -R nobody /nobody-home
ENV HOME /nobody-home
ENV GEM_HOME /nobody-home/.gem
USER nobody
```
Then in the actual build:
```dockerfile
COPY --chown=nobody:nobody . /source
```
Gems were out of date. So I spun up a container like `docker run -v $PWD:/source -it website-meschbach-builder:dev /bin/bash` and ran the following:
```bash
bundle update
bundle install
```
This needed to happen both in my tools directory, which installs `jekyll`, and within the `jekyll` directory itself.
Taiga looks promising. I am fighting myself flipping the `ai` to `ia`, so please forgive any issues with that. Looks like a Kanban system from the screenshots. Has decent reviews.
mvitale1989's is the top recommended Helm chart, so I will start with that. Looks like by default there is a lot of persistence, primarily since the Helm chart has an embedded copy of Postgres. Disabled via the following for testing; will look at configuration afterwards.
```yaml
persistence:
  enabled: false
postgresql:
  persistence:
    enabled: false
```
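For reference, the deployment itself was along these lines; the release name, namespace, and values file name are mine, and the chart reference will depend on how you fetched it:

```bash
helm upgrade --install test ./taiga \
  --namespace taiga --create-namespace \
  --values values-test.yaml
```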
Welp. This produced an “Oops, something went wrong” on the default service port. Time to figure out what went wrong.
Looks like the console is complaining about missing resources on `localhost`:
```
angular.js:12261 GET http://localhost:8080/api/v1/stats/discover net::ERR_CONNECTION_REFUSED
angular.js:12261 GET http://localhost:8080/api/v1/projects?discover_mode=true&order_by=-total_fans_last_week net::ERR_CONNECTION_REFUSED
angular.js:12261 GET http://localhost:8080/api/v1/projects?discover_mode=true&order_by=-total_activity_last_week net::ERR_CONNECTION_REFUSED
angular.js:12261 GET http://localhost:8080/api/v1/projects?discover_mode=true&is_featured=true net::ERR_CONNECTION_REFUSED
```
They use Angular! Cool! I've used that before. Probably looking for a knob which sets the canonical address of the service. Yup! They `port-forward` in the TLDR. Looks like `taiga.apiserver` needs to be set to that address. The values file now looks like this for testing:
```yaml
persistence:
  enabled: false
postgresql:
  persistence:
    enabled: false
taiga:
  apiserver: "test.taiga.svc.workshop.k8s"
```
Applying this did not restart the application. I killed the pod. Worked as expected.
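For reference, the kill-and-check dance was roughly the following; the label selector and service name are guesses at what the chart generates:

```bash
# Force a restart so the new values take effect.
kubectl delete pod --namespace taiga --selector app.kubernetes.io/name=taiga

# Reach the frontend locally while testing.
kubectl port-forward --namespace taiga svc/test-taiga 8080:80
```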
The app is running; however, now it would like a username and password. Unfortunately I have not entered one anywhere yet. So it looks like the author of the chart is using LDAP, which is fine. However I do not have that set up.
Attempting to work around the issue by using `manage.py` for the backend, via shelling into the system, resulted in an error around the `is_staff` key when running `python manage.py createsuperuser`.
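The shelling-in was along these lines; the deployment name is an assumption about what the chart calls the backend:

```bash
kubectl exec --namespace taiga -it deploy/test-taiga-backend -- \
  python manage.py createsuperuser
```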
After a bit more Googling I found a forum post stating one needs to go to `/admin/` using the credentials `admin` + `123123` to create the user. I can understand why one would use LDAP instead.
Interestingly, the Scrum project type splits various areas of work and will then sum the work. Love that they are pushing the cross-functional aspect of it. Not sure if it is correct to extract each one. It probably drives towards better estimates though, pushing values higher than 2.
This is honestly what I am more interested in. It's a basic Kanban board. Not bad overall. Highly configurable. Epics are not enabled out of the gate, but they are configurable. Overall, something I would be interested in exploring more.
From the brief “can I deploy it” view it looks reasonably promising. I will probably show it to my better half for her own usage, since she has extensive project management experience. In-depth work on it will come later.
Pretty easy to set up the first user and get them onboarded. Definitely should give the application your home location. There was one funky situation where a login redirection caused an invalid login…something which I hope does not happen again.
I installed the Tasmota integration since this is my current use case. An MQTT broker is actually required, so I will need to deploy a broker also. According to Home Assistant’s website the only supported broker is Mosquitto. Sigh, it is an Eclipse project, hopefully it will not be hard to configure. I guess that makes sense since IBM was involved in creating MQTT.
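For the Home Assistant side of that connection, the older YAML style of pointing at a broker boils down to something like the fragment below; the hostname is my guess at an in-cluster Mosquitto service, and newer Home Assistant versions configure this through the UI instead:

```yaml
# Hypothetical configuration.yaml fragment
mqtt:
  broker: mosquitto.home-assistant.svc.cluster.local
  port: 1883
```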
My Google Fu must be weak today. Best chart I can find again for a fast deployment is k8s@home. Just to get things off the ground I will try this out.
So the Tasmota automatic discovery is deprecated. There is also something funky going on with message delivery for the Tasmota software, as the consumer just hangs. To remove Home Assistant as a culprit I went straight to the MQTT broker's container. The following commands are helpful for interacting with the device:
```bash
# to listen to a device command results
mosquitto_sub -v -t 'stat/switch_tree/RESULT'

# to toggle the power state
mosquitto_pub -t cmnd/switch_tree/Power -m 'toggle'

# to query the power state
mosquitto_pub -t cmnd/switch_tree/Power -n
```
NOTE: the device here is named `switch_tree`.
Leo’s Notes Wiki entry on Home Assistant helped verify I had Home Assistant in the correct configuration. Effectively I had to run `SetOption19 0` on the Sonoff S31 device and restart Home Assistant. Not exactly sure what the difference between discovery modes is; however, the device now shows up in the test environment.
Currently it seems these are the material elements:

- `SetOption19 0` within the user interface
- The Tasmota integration
- The MQTT integration to connect to the broker

Matt Wilson (mswis) seems to have had reasonable luck deploying Home Assistant back in October. For Home Assistant itself the material point is trusting the ingress HTTP load balancer. Matt Wilson also chose to hold all persistent data on a single 30GiB volume. Although he is using MetalLB's shared IP, it does not look material to the deployment.
Sadly they appear to be deprecating their efforts due to lack of community contributions.
Regardless, it is a good source of information. Without being deeply knowledgeable about Home Assistant, it looks like it can integrate with many data stores based on the configuration.
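For example, the recorder integration can be pointed at an external database instead of the default SQLite with something like the fragment below; the connection URL is purely illustrative:

```yaml
# Hypothetical configuration.yaml fragment
recorder:
  db_url: postgresql://ha:CHANGEME@postgres.home-assistant.svc:5432/homeassistant
```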