Our Hackathon Quest to Build Something Real
Are you curious about what happens when technology meets the physical world?
So were we!
That's why we decided to take a break from our usual software projects and dedicate a hackathon day to exploring the intersection of software, hardware, and the physical world.
At Exogee, we seek to challenge ourselves with hackathon days that take us out of our comfort zone. This hackathon was no different, as the brief explains:
"We regularly build things that are virtual thought stuff, taking data from dutifully entered keystrokes and blasting it to various systems. Let's devote some time to interacting directly with the real world! The metric for success in this hackathon is that your build should either be physical (e.g. not a drawing on a typical screen), or should shape a physical interaction / process directly without the use of traditional data entry methods."
Ideas came and teams formed, full of energy, enthusiasm, and creativity.
From a plush seal that gives our ideas its "seal of approval" to a facial recognition door lock, we pushed the limits of what we could create in just one day.
With the hardware ordered and armed with our coding skills, we embarked on a journey to explore these ideas and challenges.
While the road wasn't always smooth, the end results were impressive.
Join us as we share our experiences from this epic hackathon day, and discover how we turned our curiosity into a tangible, real-world adventure.
Kevin's Project: Seal of Approval
Kevin’s idea is… well… I’ll let him explain:
The world is a harsh place, full of negativity and dream crushing reality. What if there was a friend you could always count on to have your back? One that (HONK HONK) supported your (GRUNT) ideas, even when they were (ORK ORK ORK) in their infancy (SNORT ORK ORK)?
His idea is to create a seal that listens to your project ideas and encourages you to pursue them. The seal is constrained by its flippers and general ocean-dwelling nature, yet it offers verbal encouragement regardless.
The idea is fun and playful, but there is some serious tech behind it, including the OpenAI API. Here is the technology in use:
Hardware Used
- Raspberry Pi 3B
- USB Microphone: https://core-electronics.com.au/mini-usb-microphone.html
- Speakers: https://littlebirdelectronics.com.au/products/raspberry-pi-usb-free-drive-speaker
Software Used
- NodeJS
- PicoVoice Porcupine Wake Word: https://picovoice.ai/platform/porcupine/
- PicoVoice Leopard for Speech to Text: https://picovoice.ai/platform/cat/
- OpenAI API to generate responses: https://openai.com/product
- pico2wave for text to speech: https://www.openhab.org/addons/voice/picotts/
- aplay to play wav files: https://linuxhint.com/aplay-linux-command/
That is not to say it was all straightforward. There were challenges along the way:
The first challenge I hit was that I was trying to use the Speaker NPM package (https://www.npmjs.com/package/speaker) to play sound, but I was on the 64 bit version of PiOS, so it wouldn't compile. I found a forked version that would build (https://www.npmjs.com/package/speaker-arm64), and it did work, but only exactly one time. I messed around with it for a while, then realised I could simply trigger the aplay command to avoid all of the complication entirely.
Then there was prompt engineering:
Another challenge was engineering a prompt that would get the OpenAI text-davinci-003 model to mention that it's a seal or that it lives in the ocean in its responses. It took some trial and error to land on a prompt that would work well with user-generated input.
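To give a flavour of how the pieces fit together, here is a rough sketch of the same listen, transcribe, respond, speak loop. Kevin's real build is in NodeJS, so this is not his code: it uses Picovoice's Python SDKs and the legacy OpenAI completions interface, and the wake word, prompt, recording window, and file paths are all illustrative assumptions.

```python
# Rough Python sketch of the seal's listen -> transcribe -> respond -> speak loop.
# Kevin's actual build is in NodeJS; the keyword, prompt, and timings below are
# illustrative assumptions, not his code.
import subprocess

import openai              # legacy completions API (openai < 1.0)
import pvporcupine         # Picovoice Porcupine wake word
import pvleopard           # Picovoice Leopard speech-to-text
from pvrecorder import PvRecorder

ACCESS_KEY = "YOUR_PICOVOICE_ACCESS_KEY"  # placeholder
openai.api_key = "YOUR_OPENAI_API_KEY"    # placeholder

# Hypothetical prompt; the real one took trial and error, as Kevin notes above.
SEAL_PROMPT = (
    "You are an enthusiastic seal who lives in the ocean. A friend shares a "
    "project idea. Encourage them warmly, mention your flippers or ocean life, "
    "and sprinkle in the odd ORK or HONK.\n\nIdea: {idea}\nSeal:"
)

porcupine = pvporcupine.create(access_key=ACCESS_KEY, keywords=["porcupine"])
leopard = pvleopard.create(access_key=ACCESS_KEY)
recorder = PvRecorder(frame_length=porcupine.frame_length, device_index=-1)

def speak(text: str) -> None:
    """Text-to-speech via pico2wave, then play the wav with aplay."""
    subprocess.run(["pico2wave", "-w", "/tmp/seal.wav", text], check=True)
    subprocess.run(["aplay", "/tmp/seal.wav"], check=True)

recorder.start()
try:
    while True:
        # 1. Wait for the wake word.
        if porcupine.process(recorder.read()) < 0:
            continue

        # 2. Record roughly five seconds of the idea, then transcribe it.
        frames = []
        for _ in range(int(porcupine.sample_rate / porcupine.frame_length * 5)):
            frames.extend(recorder.read())
        transcript, words = leopard.process(frames)  # recent SDKs return (transcript, words)

        # 3. Ask OpenAI for some seal-flavoured encouragement.
        completion = openai.Completion.create(
            model="text-davinci-003",
            prompt=SEAL_PROMPT.format(idea=transcript),
            max_tokens=120,
        )

        # 4. Speak the response out of the USB speaker.
        speak(completion.choices[0].text.strip())
finally:
    recorder.stop()
```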
Check out the source code if you want to learn more.
Kye's Project: Jam
Kye's idea was to build a small app that would convert musical notes into commands for other applications. As he played a musical note, the computer would "respond" in a deterministic way.
He used an audio interface to bring the audio into the PC, and a USB foot pedal to control the sample app.
Kye had a number of challenges during development that he overcame:
Initially the idea was to write a command-line app in Rust; however, I hit challenges with the FFT (fast Fourier transform) in both Rust and Node. So, to prove the general concept and have something to demonstrate, I pivoted to the browser and the p5.js library. The library has a nice set of built-in tools and doesn't require thinking about lower-level concepts like audio buffers or how the FFT results are structured.
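Kye's final demo runs in the browser with p5.js, so the snippet below is not his code. It is just a rough numpy illustration of the underlying idea: take a buffer of audio samples, find the dominant frequency with an FFT, and map it to the nearest note, which could then be translated into a command.

```python
# Rough numpy sketch of the core idea: find the dominant pitch in an audio
# buffer with an FFT and map it to a note name. Kye's actual demo uses p5.js
# in the browser; this is only an illustration of the concept.
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def dominant_note(samples: np.ndarray, sample_rate: int = 44100) -> str:
    """Return the note name of the strongest frequency in the buffer."""
    windowed = samples * np.hanning(len(samples))      # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))           # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak_freq = freqs[np.argmax(spectrum)]

    # Convert frequency to the nearest MIDI note number (A4 = 440 Hz = MIDI 69).
    midi = int(round(69 + 12 * np.log2(peak_freq / 440.0)))
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

# Example: a pure 440 Hz sine wave should map to A4.
t = np.linspace(0, 1, 44100, endpoint=False)
print(dominant_note(np.sin(2 * np.pi * 440 * t)))  # -> "A4"
```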
Here is a video of the final app.
Patrick, Steve, Revanth, and Neil's Project: Soothing Lights
The idea behind Soothing Lights is a toy with built-in lights that change colour based on the level or pitch of surrounding noise.
The toy can then be used to distract and soothe children who have trouble with loud noises.
The team used a Raspberry Pi connected to some 5V WS2812 LED strips. To detect sound, they used a high-sensitivity microphone sound-sensor module.
They then wrote some code that, when triggered by the sound sensor, would start an animation on the lights. When noise returned to ambient levels, a random pixel would change colour every second.
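The team's own code isn't reproduced here, but a rough sketch of that behaviour could look like the following. It assumes the Adafruit NeoPixel library driving the strip on GPIO 18 and the sound sensor's digital output wired to GPIO 17; the pin numbers, sensor polarity, colours, and timings are all assumptions.

```python
# Rough sketch of the behaviour described above, not the team's actual code.
# Assumes the WS2812 strip is driven by the Adafruit NeoPixel library on GPIO 18
# and the sound sensor's digital output is wired to GPIO 17 (both assumptions).
import random
import time

import board
import neopixel
import RPi.GPIO as GPIO

NUM_PIXELS = 30
SOUND_PIN = 17

pixels = neopixel.NeoPixel(board.D18, NUM_PIXELS, brightness=0.3, auto_write=False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(SOUND_PIN, GPIO.IN)

def noise_animation() -> None:
    """Sweep a calming colour down the strip when the sensor is triggered."""
    for i in range(NUM_PIXELS):
        pixels[i] = (0, 100, 255)  # calming blue
        pixels.show()
        time.sleep(0.02)

try:
    last_twinkle = time.monotonic()
    while True:
        if GPIO.input(SOUND_PIN):          # loud noise detected
            noise_animation()
        elif time.monotonic() - last_twinkle >= 1.0:
            # Ambient noise: change one random pixel every second.
            pixels[random.randrange(NUM_PIXELS)] = tuple(
                random.randint(0, 255) for _ in range(3)
            )
            pixels.show()
            last_twinkle = time.monotonic()
        time.sleep(0.01)
finally:
    GPIO.cleanup()
```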
The project went smoothly (pun intended), but Patrick commented:
We tried using some free open source software to implement the project which ended up being a little too restrictive.
In the end, the team wrote their own code to get the animation just right, and the video below shows what they produced.
Juan, Siow, Mack, and Josephine's Project: Facial Recognition Door Lock
The team's goal was to create a Facial Recognition Door Lock. The lock would recognise a face and then open the door using a swipe card mounted on a motor. The team were surprised by how easy it was to get face recognition working on a Raspberry Pi:
Face recognition is usually regarded as a complex process that would not run on a small device such as a Raspberry Pi. However, we have demonstrated that this process can actually run quite effectively on a Raspberry Pi.
A camera attached to the Pi and a swipe-card lock, with the swipe card mounted on a servo, completed their build. Yet it was not all smooth sailing; they hit some challenges:
Face detection was easy; face recognition was not.
The team used OpenCV and Python to power the project. They needed to feed images from the camera to the library so it could recognise their faces. They created a script to do that, but soon hit trouble:
When we tried to use the module to recognise a face, we were not able to feed the video to it. Instead, we ended up taking a snapshot of the video, saving this as an image locally, and passing that image to the face recognition module.
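As a rough illustration of that workaround (not the team's actual script), the snapshot approach could look something like this, assuming OpenCV for the camera capture and the popular face_recognition package as the recognition module; the file names and unlock step are placeholders.

```python
# Rough sketch of the snapshot workaround described above, not the team's code.
# Assumes OpenCV for camera capture and the popular face_recognition package as
# the recognition module; file names and the unlock step are illustrative.
import cv2
import face_recognition

# Encode a known face once, from a reference photo of a team member.
known_image = face_recognition.load_image_file("known_face.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

camera = cv2.VideoCapture(0)

def door_should_open() -> bool:
    """Grab a frame, save it to disk, and check it against the known face."""
    ok, frame = camera.read()
    if not ok:
        return False

    # The workaround: write a snapshot locally and hand the image file
    # to the recognition module instead of feeding it the live video.
    cv2.imwrite("snapshot.jpg", frame)
    snapshot = face_recognition.load_image_file("snapshot.jpg")
    encodings = face_recognition.face_encodings(snapshot)
    if not encodings:
        return False  # no face found in the snapshot

    matches = face_recognition.compare_faces([known_encoding], encodings[0])
    return matches[0]

if door_should_open():
    print("Face recognised: moving the servo to swipe the card")  # placeholder action
```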
The team quickly overcame the challenges and created a great project and working prototype.
Kishore's Project: Obstacle-Avoiding Robot
Kishore's idea was to create a simple 'house' robot that moves around its surroundings. The robot pivots its head to target its next location, avoiding obstacles along the way. His inspiration came from robot vacuums and Boston Dynamics.
The robot is built around an Arduino and requires soldering many off-the-shelf parts together. Kishore listed his top four challenges on the day:
- I followed a YouTube video that used a different version than the one I bought, so the connection spots were different.
- Soldering really small components was hard.
- I found it hard to understand what the red, black, and white wires were for on each component, and where they should be soldered as per the diagrams.
- I couldn't figure out how to log the output for debugging, till Kevin came in and saved the day.
Taylor’s Project: Naturewatch Trail Camera
Taylor is a keen hiker; you can find him on the NSW trails taking photos of wildlife. He says:
Frequently, animals reveal themselves when I'm sitting and not making much noise. I thought it would be cool to build my own trail cam to capture photos when I'm not there.
And the idea was born.
He used a project called My Naturewatch, which gave him the know-how and software to build the device. But it wasn't all plain sailing:
The Raspberry Pi camera module has a poorly documented "feature" where you can adjust the focus on the camera by turning it. Without this, the photos were all out of focus. Additionally, I was unable to run the Raspberry Pi on our local network, so I had to continually connect to the Pi's WiFi and then disconnect to read documentation.
In practice this is not an issue since I'll just connect to the Pi in the field.
Eventually, he was able to focus the lens and got some great photos.
Gavin’s Project: Human Movement Tracking with LiDAR
Gavin built an impressive demo that displays a mannequin on the screen and tracks the subject's movements accurately in real time. Gavin explained what he wanted to explore:
The prevalence of cheaper and cheaper cutting-edge hardware bundled with our handheld mobile devices unlocks a world of possibilities. Late-model Apple iPhones come equipped with a sophisticated LiDAR capable of tracking objects in real space. This, coupled with iOS’s in-built ability to track joints - such as wrists, knees, elbows and neck - on the human body (dubbed “anchors” in AR-speak) in realtime means that a subject’s human body and its movement can be used as an input to any software running on the device providing they are in-shot of the camera. Upon first inspection, this technology appears to work great but how accurate is it really?
Judging from the demo, the answer is: very accurate!
He used Apple hardware: an iPhone 13 Pro and a MacBook Pro running macOS. An Apple TV was then used to screencast the image from the phone onto a large HDMI-compatible TV.
With this hardware he was able to demonstrate real-time tracking, though there were some limitations:
The most impressive part was that the subject’s movements were matched in near-realtime. The image capture, processing - and subsequent display of the mannequin on the screen - was lightning fast. When measuring the subject’s distance from objects (such as walls) the measurements were very accurate. When measuring the distance from limb (ie. hand) to torso, the distance measurement was a lot less accurate. It seems that iOS had some trouble locating some anchors (ie. body joints) with high-accuracy much of the time. The subject’s height measurement (with the two anchor points being from head to toe) was also not very accurate. Moving one’s wrist (and measuring only that anchor point in real space) along the z-axis also yielded results that were inaccurate to a degree of +/- ~75%.
He concluded:
The device was quite accurate when locating static objects in 3D space and measuring the distance to those objects from the iPhone. It was a lot less accurate when attempting to locate the subject’s body in real space despite the ultra-responsive tracking of the subject’s body.
Dylan’s Project: Air Pen
Dylan's idea was to explore end-to-end ML tools, from capturing training data to deploying the inference model on an IoT device. He planned to do this by creating a device that could write in mid-air: the Air Pen.
The tools he used on the project included the Arduino Nano 33 BLE Sense and the Edge Impulse Studio.
With so many new tools to learn, he faced some challenges:
I was unaware of the number of data streams (9!) that the inertial measurement unit (IMU) generates. I was also surprised that Edge Impulse can easily handle even more diverse blended training data, such as combining light and humidity or even all the available sensor data. Edge Impulse is much better suited to repetitive motion than singular movements. The air pen is limited by the similarity of many numbers and letters, such as O, 0 and 6.
Until the Next Hackathon
Our hackathon day was a great success, allowing us to step away from our usual projects and explore hardware in creative ways.
With our coding skills and some new hardware, we were able to create some innovative builds.
We learned that when we step outside our comfort zone, we can create remarkable things.
So, let's keep exploring and pushing the boundaries of what we can do with technology.
See you at the next hackathon; who knows what it will bring!