A while back I started exploring data from the Reverse Beacon
Network. My initial goal had been to come up with an ML model to
predict how many DX stations the local skimmer would receive – but
there was a lot of exploration of the data as well. I captured that
exploration in a series of notebooks, and set aside the project
after a while.
One of the things I never accomplished was a satisfying display of
where stations were being received from. I was aiming for something
that would show changes over time, as well as location. Yesterday I
was browsing through this Kaggle notebook for the BirdCLEF 2021
competition when I saw a cool map being generated from something
called a shapefile. A bit of browsing around the Internet turned up
some great tutorials, and I think I have a better sense of what I
need to do next.
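Shapefiles turned out to be less mysterious than they sound: the .shp file is a simple binary format with a fixed 100-byte header. In practice a library does the reading, but as a minimal sketch of what's actually in the file (per the ESRI spec; the function name is my own), the header can be parsed with nothing but the stdlib:

```python
import struct

def read_shp_header(data):
    """Parse the fixed 100-byte header of an ESRI shapefile (.shp).

    Per the spec: the file code (9994) is big-endian; the shape type
    and the bounding box are little-endian.
    """
    if len(data) < 100:
        raise ValueError("shapefile header is 100 bytes")
    (file_code,) = struct.unpack(">i", data[0:4])
    if file_code != 9994:
        raise ValueError("not a shapefile")
    (shape_type,) = struct.unpack("<i", data[32:36])
    xmin, ymin, xmax, ymax = struct.unpack("<4d", data[36:68])
    return shape_type, (xmin, ymin, xmax, ymax)
```

For real use, `geopandas.read_file(...)` followed by `.plot()` is the two-line version that those Kaggle-style maps build on.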
The Libre Space Foundation (and thus Polaris) was accepted for the
Google Summer of Code, and we had a bunch of awesome students show
up in our chat room. A lot of work came out of that: coaching
students, evaluating their MRs, giving early feedback on proposals,
and helping them find their way through the codebase and the
problems. But these are definitely good problems to have!
Dig into more options for image augmentation, including Albumentations.
Came up with a rough prototype for the Dishwasher Loading
Critic: a (poorly) trained model, sitting behind an API written
in FastAPI, with a copied Bootstrap template. I was able to post
pictures to it from my phone & get some (poor) bounding boxes around
the dishes.
Still trying to figure out where I want to go with this project:
stick with Detecto, or move to plain PyTorch? I’d like to do the latter,
but I have a lot of learning to do there.
Got LSP-mode enabled for Emacs. Interesting, and I suspect this
will be a way forward for Emacs.
Tried Paperspace again after their upgrade, and WOW: it’s
blazingly fast to start up. I’m going to re-open my account with them.
Finally got Fedora 33 installed on an Intel NUC. The problem had
been that wifi did not work after installation, even though it
worked during installation. Turns out there’s a bug where
wpa_supplicant doesn’t get installed by the installer; installing it
afterward by hand did the trick.
Learned about nftables…huh.
First prototype of anemometer working – I’m now able to get RPM
read and displayed in Grafana. Apparently, the best option open to
me for calibrating this thing is to use a car: hold it out the
window, go at a set speed, and take measurements.
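The pipeline is: count pulses from the sensor over a window, convert that to RPM, then map RPM to wind speed using the pairs collected from the car runs. A minimal sketch, assuming one pulse per revolution and that a simple linear fit is good enough (both assumptions mine):

```python
def rpm_from_pulses(pulse_count, window_s, pulses_per_rev=1):
    """Pulses counted over a sampling window -> revolutions per minute."""
    return (pulse_count / pulses_per_rev) * (60.0 / window_s)

def fit_calibration(samples):
    """Least-squares line speed ~= a*rpm + b from (rpm, known_speed)
    pairs, e.g. measurements taken while driving at fixed speeds."""
    n = len(samples)
    sx = sum(rpm for rpm, _ in samples)
    sy = sum(speed for _, speed in samples)
    sxx = sum(rpm * rpm for rpm, _ in samples)
    sxy = sum(rpm * speed for rpm, speed in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

With a handful of (rpm, speed) pairs from the car runs, `a` and `b` are all it takes to turn raw RPM readings into the wind speed shown in Grafana.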
Began Chapter 9 of the FastAI book. This is on tabular
learning, which is really interesting; I think this is the sort of
approach I’d want to take for loostmap, my attempt to predict
HF propagation by looking at data from the Reverse Beacon Network
(I picked that project name from a random name generator…I really
need something that makes more sense.)
Talked to my manager about the possibility of looking for DS/ML
projects at work. Apparently there’s one team he knows of that’s
looking into a project in this area, and the possibility exists to
work with them for a bit. 🤞
My father-in-law finished a prototype of our anemometer; he’s a
retired millwright, so he actually knows what he’s doing. (puts
popsicle sticks and yarn away)
A few contests entered. Closer to getting my WAS (Worked All States) –
only missing Maine and Nebraska, and state contests for those are coming up in
the next few months.
A while back, I started having problems with the output of Venus, a
Planet-style aggregator I use to read a bunch of things. The symptoms
were broken characters for things like apostrophes, quotes and so on
– which rendered the output nearly unusable. I dug into it,
but couldn’t resolve the problem…so I resorted to a bletcherous hack
(cron job to copy the file to my laptop, and view it with
file:///...) and blamed Python 2.
Today I came across the same problem, manifested in another set of
files. This time I managed to find the answer – an Apache directive:
AddCharset UTF-8 .htm .html .js .css
To be clear, I had already:
made sure the headers for the file included Content-Type: text/html;charset=utf-8
made sure the HTML file had <meta charset="utf-8">
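Those broken apostrophes and quotes are classic mojibake: UTF-8 bytes being reinterpreted under a single-byte encoding (browsers treat an unspecified or Latin-1 charset as Windows-1252). A quick Python illustration of what the browser was doing:

```python
# A right single quote (U+2019) becomes three bytes in UTF-8; when the
# server claims the page is Windows-1252, those three bytes render as
# three junk characters instead of one apostrophe.
text = "it\u2019s"                        # it’s
mojibake = text.encode("utf-8").decode("cp1252")
print(mojibake)                           # itâ€™s
```

Seeing `â€™` where an apostrophe should be is a near-certain sign of exactly this UTF-8-as-Windows-1252 mismatch.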
I’ve been interested in machine learning for a while now. Like a lot
of things, my approach has been a bit scattered. I’m slowly learning
how to get better at that, but I still tend to veer around.
A couple of months ago, I decided to take the Fast.ai course
again. I had done a couple of lessons a year ago, but had not
followed it up. This time around, I saw that they not only had a new
version of the course, but a book as well. I ordered the book
(and another book as well), and got started on the Jupyter
notebooks that the book is based on.