
Andrew Womeldorf
Software Engineer
First, compile an SQLite extension.
Then, register a new SQL Driver:
```go
package main

import (
	"database/sql"
	"log"

	"github.com/mattn/go-sqlite3"
)

func main() {
	sql.Register("sqlite3-uuid", &sqlite3.SQLiteDriver{
		ConnectHook: func(conn *sqlite3.SQLiteConn) error {
			return conn.LoadExtension("uuid", "sqlite3_uuid_init")
		},
	})

	db, err := sql.Open("sqlite3-uuid", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```

There were two critical pieces that had me stuck for too long.
Uh, I don’t know C… at all. How on earth do I compile an extension for SQLite3?!
Here’s how I compiled and used the UUID extension on Ubuntu.
1. Install libsqlite3-dev.
2. Clone the SQLite repository. Technically it's a Fossil project, but imma just clone from GitHub.
3. Compile.
4. Optional: move to a more global location.

```shell
sudo apt install libsqlite3-dev
git clone https://github.com/sqlite/sqlite
gcc -fPIC -shared sqlite/ext/misc/uuid.c -o uuid.so -lm
sudo mv uuid.so /usr/lib/uuid.so
```
The storage drive on my Framework failed this weekend. Thankfully, former me was wise enough to schedule nightly backups with Restic, which I was able to restore from.
Problem

My SSD failed on my Framework laptop this weekend. I had been working on it, plugged it in and left for an errand, and when I came back, the fan was on high and the screen was black. I shut down the computer, and when I turned it back on, I got a blue screen on startup that said:
I was trying to deploy my first Nomad job that queried values out of Vault to set environment variables. The Nomad logs kept indicating that the token couldn't call renew-self, getting permission denied. I was able to use the token that my Nomad client was given and call renew-self myself, so I was very confused.

As it turns out, the derived token that is used for the job also calls renew-self! I needed to add an extra line to the policy to allow the job token to renew itself.
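The post doesn't quote the policy itself, so as a sketch: Vault's token renewal endpoint is `auth/token/renew-self`, and granting a token permission to call it looks like this in a Vault policy (whether this is the exact line the job's policy was missing is an assumption):

```hcl
# Allow the token derived for the Nomad job to renew itself.
path "auth/token/renew-self" {
  capabilities = ["update"]
}
```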
I received an error while trying to launch a Nomad job with the exec driver on my cluster of Raspberry Pi 4s. Something about cgroups.
The Pi4 does not enable the memory cgroup by default, which the exec driver requires to execute correctly. To validate this assertion:
```
$ cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
...
memory          0               99              0
...
```

To fix, /boot/cmdline.txt needed to be updated on each node, followed by a reboot.
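The exact change isn't shown above, but the kernel parameters commonly documented for enabling the memory cgroup on Raspberry Pi OS are `cgroup_enable=memory cgroup_memory=1`, appended to the end of the existing single line in /boot/cmdline.txt. A sketch, run here against a sample file with made-up boot arguments:

```shell
# On a real node you would edit /boot/cmdline.txt itself (with sudo) and then
# reboot. Everything must stay on one line.
printf 'console=serial0,115200 root=/dev/mmcblk0p2 rootwait\n' > /tmp/cmdline.txt
sed -i '$ s/$/ cgroup_enable=memory cgroup_memory=1/' /tmp/cmdline.txt
cat /tmp/cmdline.txt
```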
After far too many days of failure, I got an NFS share working with the CSI Plugin running on Nomad (version 1.3.3)!
I have a three-node pi cluster running Nomad. I created the NFS share on the pi that has an external SSD plugged into it, and was able to create new files and directories on it from all three pi's, after mounting the NFS share. I hit some odd permission behavior, where I had to use sudo on the node sharing the drive in order to make a directory, but then every node could use it.
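For reference, a hypothetical /etc/exports entry on the node with the SSD — the path, subnet, and options here are assumptions, not the config from the post:

```
# /etc/exports on the pi with the external SSD (path and subnet are made up)
/mnt/ssd 192.168.0.0/24(rw,sync,no_subtree_check)
```

After `sudo exportfs -ra` on the server, each client can mount it with something like `sudo mount -t nfs 192.168.0.10:/mnt/ssd /mnt/ssd` (again, a hypothetical address and path).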
I’ve recently begun volunteering some of my time at church to help run the sound board for worship services. The church has a digital board, which is foreign to me, and there’s been a relatively steep learning curve to get up to speed.
Given that I’m not there where I can learn and practice on the board every week, I need to take pretty good notes. Besides that, I like to keep notes about each service as we do rehearsals, so that I can keep the important aspects to highlight written down for reference.
```shell
#!/usr/bin/env bash

FAKE_API_RESPONSE='{"name":"package1.file1.js","content":"var foo = {\n\t\"bar\" = \"banana\"\n}\nconsole.log(foo)"}'

echo "$FAKE_API_RESPONSE" |
	jq -r '[.name,.content] | @tsv' |
	while IFS=$'\t' read -r apiname script; do
		dirname="./$(echo "$apiname" | cut -d '.' -f 1)"
		filename="$dirname/$(echo "$apiname" | cut -d '.' -f 2).js"
		mkdir -p "$dirname"
		# @tsv escaped the newlines and tabs; convert them back to real whitespace.
		echo "$script" | sed 's/\\n/\n/g; s/\\t/\t/g' > "$filename"
	done
```

I'm working with an API that returns several JSON objects, each object as a new line.
The API endpoint I’m querying returns a JSON object, where some fields are either an empty string or a map[string]interface{}. The map[string]interface{} actually follows a consistent schema, but since the actual received value is not a consistent type, I’m having to put the type as interface{} and check those fields specially.
```go
package main

import (
	"encoding/json"
	"log"
)

type Foo struct {
	Name       string      `json:"name"`
	SillyField interface{} `json:"silly_field"`
}

// printSillyField checks which concrete type silly_field actually holds.
func printSillyField(f Foo) {
	if f.SillyField == "" {
		log.Println("silly_field was an empty string")
		return
	}
	m, ok := f.SillyField.(map[string]interface{})
	if !ok {
		log.Printf("unexpected type for silly_field: %T", f.SillyField)
		return
	}
	log.Println("silly_field:", m)
}

func main() {
	var f Foo
	if err := json.Unmarshal([]byte(`{"name":"a","silly_field":""}`), &f); err != nil {
		log.Fatal(err)
	}
	printSillyField(f)
}
```
Code first, story later.
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"math"
	"math/rand"
	"sync"
	"time"
)

type inputType struct {
	offset    int
	batchSize int
}

// fakeAPI sleeps a random amount of time, and returns errors sometimes.
func fakeAPI(input inputType) (json.RawMessage, error) {
	rand.Seed(time.Now().UnixNano())
	n := rand.Intn(10)
	time.Sleep(time.Duration(n) * time.Millisecond)

	mod := math.Mod(float64(input.offset), 11)
	if mod == 0 {
		return json.RawMessage{}, fmt.Errorf("error: divisible by 11: %d", input.offset)
	}
	return json.RawMessage(fmt.Sprintf(`{"offset": %d}`, input.offset)), nil
}

func main() {
	// Fan out a handful of fakeAPI calls concurrently and wait for them all.
	var wg sync.WaitGroup
	for offset := 1; offset <= 5; offset++ {
		wg.Add(1)
		go func(offset int) {
			defer wg.Done()
			msg, err := fakeAPI(inputType{offset: offset, batchSize: 100})
			if err != nil {
				log.Println(err)
				return
			}
			log.Println(string(msg))
		}(offset)
	}
	wg.Wait()
}
```