Andrew Womeldorf
Software Engineer
```sh
#!/usr/bin/env sh
#
# Loop a JSON list and perform some action

fakelist='["foo", "bar", "baz"]'

echo "$fakelist" | jq -rc '.[]' | while read -r item; do
    echo "$item"
done
```

Handy dandy.
We currently have our domestic phone plan through US Mobile. We have their unlimited plan, which includes a free 10GB of international data. Before traveling, we added a new eSIM to our phone for the destination. It was so simple to set up, and it worked great. Maybe worth noting that it only provided data, but what do we need calling and texting for? We can do all of our communication over the internet.
I was interested in exploring NHTSA Vehicle Data without worrying about the rate limits placed on their API. They make a backup of their MS SQL Server database available as a .bak file. However, I don’t know SQL Server, and I don’t really want to learn it too deeply. Also, it’s not necessarily “fun” to run SQL Server locally on Linux.
Here’s a pretty rudimentary solution I came up with. It uses Docker to run a SQL Server container.
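A minimal sketch of what that can look like, using Microsoft's official container image; the password, container name, and mount paths here are placeholders, not the post's exact setup:

```shell
# Run SQL Server in a container, mounting a local directory that holds
# the downloaded .bak file so it can be restored from inside the container.
docker run -d --name mssql \
    -e ACCEPT_EULA=Y \
    -e MSSQL_SA_PASSWORD='YourStrong!Passw0rd' \
    -p 1433:1433 \
    -v "$PWD/backups:/var/opt/mssql/backup" \
    mcr.microsoft.com/mssql/server:2019-latest
```

From there, the .bak can be restored with a `RESTORE DATABASE` statement via `sqlcmd`, and queried over port 1433 like any other SQL Server instance.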
I like to build an AWS Lambda in Go and still be able to test it locally.
The AWS Lambda runtime sets some environment variables. Use those to check whether you’re running in the Lambda runtime.
```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct{}

var isLambda bool

func init() {
	// AWS_LAMBDA_RUNTIME_API is set by the Lambda execution environment.
	runtime := os.Getenv("AWS_LAMBDA_RUNTIME_API")
	if runtime != "" {
		isLambda = true
	}
}

func run(ctx context.Context, event MyEvent) error {
	fmt.Println("handler invoked")
	return nil
}

func main() {
	if isLambda {
		lambda.Start(run)
	} else {
		// Invoke the handler directly when running locally.
		if err := run(context.Background(), MyEvent{}); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
	}
}
```
Here’s a simple example.
The lambda function and its requirements are in lambda/.
A null_resource is responsible for triggering a rebuild. The trigger calculates a base64sha256 of each file in lambda/ and concatenates the results into a single string. Any time files are added, removed, or modified inside lambda/, the null resource is rerun. The provisioner removes any existing build output directory out/, installs the packages with pip, and copies the source files into the build output directory.
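A minimal sketch of such a resource, under the layout described above; the `requirements.txt` path and the `*.py` glob are assumptions about the contents of lambda/:

```hcl
resource "null_resource" "lambda_build" {
  triggers = {
    # Hash every file in lambda/ and join the hashes into one string,
    # so any added, removed, or modified file changes the trigger.
    source_hash = join("", [
      for f in fileset("${path.module}/lambda", "**") :
      filebase64sha256("${path.module}/lambda/${f}")
    ])
  }

  provisioner "local-exec" {
    command = <<-EOT
      rm -rf ${path.module}/out
      pip install -r ${path.module}/lambda/requirements.txt -t ${path.module}/out
      cp ${path.module}/lambda/*.py ${path.module}/out/
    EOT
  }
}
```

The out/ directory can then be zipped (e.g. with the `archive_file` data source) and handed to the Lambda resource.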
walterwanderley/sqlc-grpc is a very interesting project that builds on the output of kyleconroy/sqlc to generate a functional REST/JSON and gRPC server. When I started toying with it, I was a little disappointed that I couldn’t easily add any application logic outside of my SQL. It could be great for scaffolding a project, but it seemed like I might not be able to add anything like validation.
Having had some more time to explore the project, I now see that there is a way to add a layer between the communication layer and the database layer.
First, compile an SQLite extension.
Then, register a new SQL Driver:
```go
package main

import (
	"database/sql"
	"log"

	"github.com/mattn/go-sqlite3"
)

func main() {
	sql.Register("sqlite3-uuid", &sqlite3.SQLiteDriver{
		ConnectHook: func(conn *sqlite3.SQLiteConn) error {
			// Load the compiled uuid extension on every new connection.
			return conn.LoadExtension("uuid", "sqlite3_uuid_init")
		},
	})

	db, err := sql.Open("sqlite3-uuid", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```

There were two critical pieces that had me stuck for too long.
Uh, I don’t know C… at all. How on earth do I compile an extension for SQLite3?!
Here’s how I compiled and used the UUID extension on Ubuntu.
1. Install libsqlite3-dev.
2. Clone the SQLite repository. Technically it’s a Fossil project, but imma just clone from GitHub.
3. Compile.
4. Optional: move to a more global location.

```sh
sudo apt install libsqlite3-dev
git clone https://github.com/sqlite/sqlite
gcc -fPIC -shared sqlite/ext/misc/uuid.c -o uuid.so -lm
sudo mv uuid.so /usr/lib/uuid.
```
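A quick way to sanity-check the compiled extension from the sqlite3 CLI, assuming uuid.so is still in the current directory:

```shell
# .load resolves ./uuid to ./uuid.so; uuid() comes from the extension
sqlite3 :memory: '.load ./uuid' 'SELECT uuid();'
```

If the extension compiled correctly, this prints a freshly generated UUID.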
The storage drive on my Framework failed this weekend. Thankfully, former me was wise enough to schedule nightly backups with Restic, which I was able to restore from.
Problem

My SSD failed on my Framework laptop this weekend. I had been working on it, plugged it in, and left for an errand; when I came back, the fan was on high and the screen was black. I shut down the computer, and when I turned it back on, I got a blue screen on startup that said:
I was trying to deploy my first Nomad job that queried values out of Vault to set environment variables. The Nomad logs kept indicating that the token couldn’t renew-self, getting permission denied. I was able to use the token that my Nomad client was given and renew-self, so I was very confused.
As it turns out, the derived token used for the job also calls renew-self! I needed to add an extra line to the policy to allow the job token to renew itself.
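The stanza in question uses Vault's standard token-renewal path; a sketch of the relevant part of the policy, with the rest of the policy omitted:

```hcl
# Allow the derived job token to renew itself
path "auth/token/renew-self" {
  capabilities = ["update"]
}
```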