The Random Walk Blog

2024-11-21

From Blinking LEDs to Real-Time AI: The Raspberry Pi’s Role in Innovation

The Raspberry Pi, launched in 2012, has entered the vocabulary of the world’s doers and makers. It was designed as an affordable, accessible microcomputer for students and hobbyists. Over the years, it has evolved from a modest credit card-sized computer into a versatile platform that powers innovations in fields as diverse as home automation, IoT, AI, robotics and industrial automation. Raspberry Pis are single-board computers available in an assortment of models priced from roughly $4 to $70. Here, we’ll trace the journey of the Raspberry Pi’s evolution and explore some of the innovations it has spurred, with examples and code snippets.

The Early Days: A Learning Platform

Founder Eben Upton has shared that Raspberry Pi began as a foundation aimed at encouraging young students to pursue computer science. The first generation of the Raspberry Pi was built to teach programming and computing to a new generation. It included basic GPIO (General-Purpose Input/Output) pins, which allowed users to interact with external devices, and GPIO programming remains at the core of many Raspberry Pi projects today. Thanks to the Raspberry Pi, computer hardware is no longer a black box: a new generation now tinkers directly with single-board hardware. Even the founder of Raspberry Pi is surprised at what people do with their Pis: “People do not do with their Raspberry Pi what I expected them to do. I expected people to write computer games because that’s what I did when I was a child. In practice, people build robots. The interesting thing: kids find moving atoms around in the world much more interesting than moving pixels around on the screen.”

Here's a simple Python code snippet to blink an LED using the GPIO pins, showcasing the foundation of physical computing with the Raspberry Pi:

import RPi.GPIO as GPIO
import time

# Set up GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)  # GPIO 18 (BCM numbering) as output

# Blink the LED
for i in range(5):
    GPIO.output(18, GPIO.HIGH)
    time.sleep(1)
    GPIO.output(18, GPIO.LOW)
    time.sleep(1)

# Clean up GPIO
GPIO.cleanup()

This basic example of using GPIO to control hardware paved the way for a massive community of makers, educators, and hobbyists who leveraged the Pi's capabilities for various projects.

Raspberry Pi as a Platform for IoT

With each iteration, the Raspberry Pi’s capabilities expanded. The addition of Wi-Fi and Bluetooth in the Raspberry Pi 3 marked a turning point, making it suitable for IoT projects. Its compact size, affordability and flexibility enabled collecting, processing, and transmitting data from sensors and devices in smart home systems, industrial automation, and environmental monitoring setups. The Pi became an ideal candidate for IoT applications, enabling users to connect sensors and control devices remotely. Using the Raspberry Pi as an IoT hub allows developers to implement data-driven applications that communicate over protocols like MQTT and HTTP, facilitating real-time control and remote monitoring across devices.
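
To make that concrete, here is a minimal sketch of the publish side of such a setup, using the paho-mqtt library on the Pi. The broker address, topic name, and read_temperature() helper are placeholders for illustration, not part of any specific deployment.

import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER = "broker.example.com"          # placeholder: your MQTT broker's host
TOPIC = "home/livingroom/temperature"  # placeholder topic

def read_temperature():
    # Placeholder: swap in a real sensor read (e.g. a DHT22 or DS18B20 driver)
    return 24.7

client = mqtt.Client()  # on paho-mqtt 2.x, pass mqtt.CallbackAPIVersion.VERSION2 first
client.connect(BROKER, 1883, 60)
client.loop_start()  # handle network traffic on a background thread

try:
    while True:
        payload = json.dumps({"temperature": read_temperature(), "ts": time.time()})
        client.publish(TOPIC, payload, qos=1)
        time.sleep(10)  # publish a reading every 10 seconds
finally:
    client.loop_stop()
    client.disconnect()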

At Random Walk, we pioneered an initiative to monitor sound pollution across the globe. You can monitor the network here - Noise Monitoring. You can read more about the setup in our blog post on real-time decibel mapping - Monitoring Sound Pollution: An Innovative Approach with Real-Time Decibel Mapping. Below is the Arduino sketch (targeting an ESP8266-based sensor node) to set up your own noise sensor and join this global network of noise monitoring devices.

#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>
#include <ArduinoJson.h>
#include <TimeLib.h>
#include <WiFiUdp.h>
#include <time.h>

const char* WIFI_SSID = "";      // enter your Wi-Fi SSID (network name) here, within the double quotes
const char* WIFI_PASSWORD = "";  // enter your Wi-Fi password here, within the double quotes


WiFiClient wifiClient;
HTTPClient http;

const int sampleWindow = 50;
const float referenceVoltage = 0.0048;
const int delayTime = 1000;

const double latitude = 0.0;   // replace 0.0 with your device's latitude
const double longitude = 0.0;  // replace 0.0 with your device's longitude

const String device_id = "XXXXXX"; //replace XXXX with your device id

String getCurrentDateTimeIST(); // returns the current date and time in IST (defined at the end of the sketch)

void setup() {
    Serial.begin(115200);
    connectToWiFi();
    configTime(0, 0, "pool.ntp.org", "time.nist.gov");
    while (time(nullptr) < 8 * 3600 * 2) {
        Serial.print(".");
        delay(500);
    }
    Serial.println("
Time synchronized");
}

void loop() {
    if (WiFi.status() != WL_CONNECTED) {
        connectToWiFi();
    }

    float decibels = measureDecibels();
    String currentDateTimeIST = getCurrentDateTimeIST(); // Get combined date and time

    DynamicJsonDocument doc(256);
    doc["deviceId"] = device_id;
    doc["decibelReading"] = decibels;
    doc["timestamp"] = currentDateTimeIST; // Send the combined timestamp
    doc["latitude"] = latitude;
    doc["longitude"] = longitude;

    String output;
    serializeJson(doc, output);

    sendDataToServer(output);

    delay(delayTime);
}

void connectToWiFi() {
    Serial.print("Connecting to Wi-Fi");
    WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
    while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print(".");
    }
    Serial.println("
Connected to Wi-Fi");
}

float measureDecibels() {
    unsigned long startMillis = millis();
    unsigned long endMillis = startMillis + sampleWindow;
    float peakToPeak = 0;
    int signalMax = 0;
    int signalMin = 1023;

    while (millis() < endMillis) {
        int sensorValue = analogRead(A0);
        if (sensorValue > signalMax) {
            signalMax = sensorValue;
        }
        if (sensorValue < signalMin) {
            signalMin = sensorValue;
        }
    }

    peakToPeak = signalMax - signalMin;
    float voltage = peakToPeak * (3.3 / 1024.0);
    float rmsVoltage = voltage / sqrt(2);
    float decibels = 20 * log10(rmsVoltage / referenceVoltage);

    if (rmsVoltage < referenceVoltage * 0.01) {
        decibels = 0;
    }

    Serial.println("Decibels: " + String(decibels));
    return decibels;
}

void sendDataToServer(String data) {
    Serial.println("Attempting to send data: " + data);
    http.begin(wifiClient, "https://172.16.0.95:5000/api/v1/datalog");
    http.addHeader("Content-Type", "application/json");
    int httpResponseCode = http.POST(data);
    
    Serial.println("HTTP Response code: " + String(httpResponseCode));
    
    if (httpResponseCode > 0) {
        String response = http.getString();
        Serial.println("Server Response: " + response);
    } else {
        Serial.println("Error on sending POST: " + http.errorToString(httpResponseCode));
    }

    http.end();
    Serial.println("HTTP connection closed");
}

// Builds the combined date-and-time string in IST (UTC+5:30) from the NTP-synced clock.
// Note: this is a minimal sketch of the helper; the exact output format is an assumption,
// so adjust it to whatever timestamp format your server expects.
String getCurrentDateTimeIST() {
    time_t now = time(nullptr) + 19800;  // shift the epoch by 5 h 30 min for IST
    struct tm* t = gmtime(&now);
    char buffer[25];
    snprintf(buffer, sizeof(buffer), "%04d-%02d-%02d %02d:%02d:%02d",
             t->tm_year + 1900, t->tm_mon + 1, t->tm_mday,
             t->tm_hour, t->tm_min, t->tm_sec);
    return String(buffer);
}

AI and Edge Computing with Raspberry Pi

The Raspberry Pi 4, with its increased RAM (up to 8GB) and powerful quad-core processor, opened doors to AI and machine learning (ML) applications. The addition of USB 3.0 allows for faster data transfer, making it suitable for edge computing applications where real-time data processing is crucial. These upgrades enable faster data processing, reduced latency, and real-time decision-making, essential for deploying AI models on the edge.

The Random Walk team has brought AI to new heights by running an LLM agent on a Raspberry Pi 4 - Jarvis. Learn the story of how we made this small device power mighty AI in our experiment, Tiny Pi, Mighty AI.

Using pre-trained models with frameworks like TensorFlow Lite, developers can now run image recognition, natural language processing, and predictive analytics on the Pi 4, creating low-cost, scalable solutions for applications in smart cities, industrial IoT, and remote monitoring, all without relying on cloud-based resources.

Due to an ongoing global chip shortage, the Raspberry Pi Foundation has been facing challenges in keeping up with demand for its popular, affordable microcomputers. In an interview with The Verge, CEO Eben Upton shared insights into how Raspberry Pi has managed to navigate this crisis, addressing limited production capacity, increased lead times, and price adjustments. Despite these setbacks, Upton highlighted the sustained enthusiasm for the platform among DIY enthusiasts, educators, and developers, underscoring Raspberry Pi’s commitment to accessibility and potential future advancements once supply chains stabilize.
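
To give a rough sense of the TensorFlow Lite workflow described above, the sketch below classifies a single image with the tflite_runtime package. The model file, label file, and test image names are placeholders; any pre-trained image classifier exported to the .tflite format (for example, a MobileNet variant) would work the same way.

import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

MODEL_PATH = "model.tflite"   # placeholder: a pre-trained image classifier
LABELS_PATH = "labels.txt"    # placeholder: its matching label file
IMAGE_PATH = "test.jpg"       # placeholder: the image to classify

# Load the model and allocate its input/output tensors
interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Resize the image to the model's expected input shape
height, width = input_details["shape"][1:3]
image = Image.open(IMAGE_PATH).convert("RGB").resize((width, height))
input_data = np.expand_dims(np.array(image, dtype=input_details["dtype"]), axis=0)
# (a float model may additionally need its input values normalized)

# Run inference and print the top prediction
interpreter.set_tensor(input_details["index"], input_data)
interpreter.invoke()
scores = np.squeeze(interpreter.get_tensor(output_details["index"]))

with open(LABELS_PATH) as f:
    labels = [line.strip() for line in f]
top = int(np.argmax(scores))
print(f"Prediction: {labels[top]} (score {scores[top]})")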

The Raspberry Pi’s evolution from a simple educational tool to a powerful IoT and AI platform underscores its adaptability and relevance. Its journey aligns with technological advances, fostering innovations in countless fields. Whether you’re a student, developer, or industry professional, the Raspberry Pi provides an accessible entry point into the world of edge computing and innovation.

Related Blogs

Refining and Creating Data Visualizations with LIDA

Microsoft’s Language-Integrated Data Analysis (LIDA) is a game-changer, offering an advanced framework to refine and enhance data visualizations with seamless integration, automation, and intelligence. Let’s explore the key features and applications of LIDA, and its transformative impact on the data visualization landscape. LIDA is a powerful library designed to effortlessly generate data visualizations and create data-driven infographics with precision. What makes LIDA stand out is its grammar-agnostic approach, enabling compatibility with various programming languages and visualization libraries, including popular ones like matplotlib, seaborn, altair, and d3. Plus, it seamlessly integrates with multiple large language model providers such as OpenAI, Azure OpenAI, PaLM, Cohere, and Huggingface.

Core Web Vitals: How to Improve LCP and CLS for Optimal Site Performance

Optimizing a website for performance is essential to enhance user experience and boost search engine rankings. Two critical metrics from Google’s Core Web Vitals (CWV)—Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS)—play a significant role in measuring and improving a site’s performance. These metrics outline the key strategies for optimization and highlight the observed impact on both mobile and desktop performance.

From Frontend-Heavy to a Balanced Architecture: Enhancing System Efficiency

Building efficient and scalable applications often requires balancing responsibilities between the frontend and backend. When tasks like report generation are managed solely on the frontend, it can lead to performance bottlenecks, scalability issues, and user experience challenges. Transitioning to a balanced architecture can address these limitations while improving overall system efficiency.

Exploring Different Text-to-Speech (TTS) Models: From Robotic to Natural Voices

Text-to-speech (TTS) technology has evolved significantly in the past few years, enabling one to convert simple text to spoken words with remarkable accuracy and naturalness. From simple robotic voices to sophisticated, human-like speech synthesis, models offer specialized capabilities applicable to different use cases. In this blog, we will explore how different TTS models generate speech from text as well as compare their capabilities, models explored include MARS-5, Parler-TTS, Tortoise-TTS, MetaVoice-1B, Coqui TTS among others. The TTS process generally involves several key steps discussed later in detail: input text and reference audio, text processing, voice synthesis and then the final audio is outputted. Some models enhance this process by supporting few-shot or zero-shot learning, where a new voice can be generated based on minimal reference audio. Let's delve into how some of the leading TTS models perform these tasks.

A Beginner’s Guide to Automated Testing

A cursory prompt to chatGPT asking for guidance into the world of automated testing, spits out the words Selenium and Taiko. This blog post will explore our hands-on experience with these tools and share insights into how they performed in real-world testing scenarios. But first what is automated testing? Automated testing refers to the process of using specialized tools to run predefined tests on software applications automatically. It differs from manual testing, where human testers interact with the software to validate functionality and identify bugs. The key USPs of automated testing are efficiency in terms of multiple repeat runs of test cases, integration with CI/CD pipelines like Github actions and reliability.

