IoT, or the Internet of Things, is a technological field that makes it possible for users to connect devices and systems and exchange data over the internet. Through DZone's IoT resources, you'll learn about smart devices, sensors, networks, edge computing, and many other technologies — including those that are now part of the average person's daily life.
Building Your Own IoT Project: A Step-By-Step Guide
Building an IoT-based Waste Management System: A Software Architect's Guide
Deploying microservices in a Kubernetes cluster is critical in 5G telecom, but it also introduces significant security risks. While firewall rules and proxies provide an initial layer of protection, the default communication mechanisms within Kubernetes, such as unencrypted network traffic and a lack of access control, are inherently insecure and could expose sensitive data. Securing communication between microservice pods therefore requires additional measures, which traditionally means extra configuration inside each application. Istio provides a robust solution to these challenges by managing communication between individual 5G telecom microservice pods: its control plane automatically injects a sidecar proxy into each microservice pod, ensuring secure and efficient communication. Let's dive deep.

What Is Istio?
Istio is an open-source service mesh that integrates with microservices-based applications and simplifies monitoring, management, and the enforcement of performance and security policies. It prevents overload, restricts unauthorized access, and secures data in transit, streamlining the operation of microservices so that performance and security requirements are met with little extra effort.

What Is a Sidecar Proxy?
A sidecar proxy is a separate container that runs alongside a Kubernetes microservice pod. It is responsible for offloading functions required by all applications within Istio. The sidecar proxy intercepts the application's incoming and outgoing network traffic, which lets telecom operators apply policies, use Istio's resiliency features, and perform advanced functions at the interface point with the outside world.

Architecture
Istio's architecture is built around two components:
The data plane: a set of proxies (deployed using Envoy, an open-source proxy for distributed applications) that run alongside microservices as sidecar containers.
The control plane: manages the proxies and dictates their actions.
Let's look at each component in more detail. The control plane includes the following components:
Pilot: Manages service discovery and traffic.
Citadel: Manages security and enables secure communication.
Galley: Validates and distributes configuration resources.
Mixer: Handles policy enforcement and telemetry collection.
Sidecar Injector: Automatically injects Envoy sidecar proxies into Kubernetes pods for easy integration.
To explain how the Istio architecture works, consider the example from the architecture diagram above: the sidecar proxies deployed with Microservice-A and Microservice-B handle their communication. Each sidecar proxy intercepts network traffic, which lets the mesh enforce policies, apply resiliency features, and enable advanced functions. When Microservice-A sends a request to Microservice-B, the sidecar proxy identifies the destination, forwards the request, and checks the service-to-service communication policy to determine whether the call should go through based on security, performance, and reliability.
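As an illustration of what such a service-to-service policy can look like, here is a minimal Istio AuthorizationPolicy sketch; the names, namespace, and service account are hypothetical and not taken from the article.

YAML
# Illustrative only: allow Microservice-A's identity to call Microservice-B.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-microservice-a
  namespace: telecom-5g                 # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: microservice-b               # policy applies to Microservice-B's sidecar
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/telecom-5g/sa/microservice-a"]  # mTLS identity of Microservice-A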
This process of intercepting, forwarding, and checking ensures that the request is handled appropriately. If the request is allowed, Microservice-B processes it, prepares the response, and sends it back over the network; the response is again intercepted and forwarded by the sidecar proxies back to the caller, Microservice-A.

Understanding the Importance of Istio Service Mesh for Kubernetes Microservices
The Istio service mesh is essential in Kubernetes. While Kubernetes manages microservices, it doesn't handle traffic flow management, access policies, or telemetry data collection. Istio provides these capabilities without requiring changes to application code, making it an attractive solution for managing microservices in Kubernetes using sidecar containers. It can run in any distributed environment, providing a secure solution for cloud or on-premises applications. Istio supports Kubernetes distributions, including managed services like EKS as well as self-managed clusters. It also works with different application orchestration platforms and all kinds of microservices applications, including serverless architectures.

Advantages of Istio
Istio offers several critical benefits for Kubernetes and Istio-compatible platforms:
Security: Enforces strong authentication and authorization requirements between microservices.
Application performance: Efficiently routes traffic between microservices and handles retries and failovers.
Observability: Collects telemetry data from individual microservices for detailed visibility into health and performance.
Troubleshooting: Monitors each microservice individually to identify and address performance and security issues.
Overall, Istio simplifies management for admins of modern, microservices-based applications.

Configuration YAMLs (YAML Ain't Markup Language)
The Service Mesh Control Plane (SMCP) manages proxies to route traffic, provides policy and configuration for the data plane, and lets administrators define and configure various services. Once configured, the SMCP distributes the necessary information to the service mesh's data plane, allowing proxies to adapt their behavior dynamically. Telecom operators can install and run the SMCP using the configuration below:

SMCP YAML

YAML
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: full-install
  namespace: istio-system
spec:
  version: v2.1
  techPreview:
    meshConfig:
      defaultConfig:
        concurrency: 8 # Adjust according to the need
  proxy:
    runtime:
      container:
        resources:
          requests:
            cpu: 500m
            memory: 256Mi
          limits: # Adjust according to the need
            cpu: "1"
            memory: 1Gi
  tracing:
    sampling: 10000 # 0.01% increments. 10000 samples 100% of traces
    type: Jaeger
  gateways:
    ingress: # istio-ingressgateway
      service:
        type: ClusterIP
        ports:
          - name: status-port
            port: 15020
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
      meshExpansionPorts: []
    egress: # istio-egressgateway
      service:
        type: ClusterIP
        ports:
          - name: status-port
            port: 15020
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
    additionalIngress:
      some-other-ingress-gateway: {}
    additionalEgress:
      some-other-egress-gateway: {}
  policy:
    type: Istiod
  telemetry:
    type: Istiod
  addons:
    grafana:
      enabled: true
    kiali:
      name: kiali
      enabled: true
      install: # install kiali CR if not available
        dashboard:
          viewOnly: false
          enableGrafana: true
          enableTracing: true
          enablePrometheus: true
    jaeger:
      name: jaeger-production
      install:
        storage:
          type: Elasticsearch
          elasticsearch:
            nodeCount: 3
            redundancyPolicy: SingleRedundancy
            indexCleaner:
              enabled: true
              numberOfDays: 7
              schedule: 55 23 * * *
        ingress:
          enabled: true
  runtime:
    components:
      tracing.jaeger.elasticsearch: # only supports resources and image name
        container:
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
      pilot:
        deployment:
          autoScaling:
            enabled: true
            minReplicas: 2
            maxReplicas: 2
            targetCPUUtilizationPercentage: 85
        pod:
          tolerations:
            - key: node.kubernetes.io/unreachable
              operator: Exists
              effect: NoExecute
              tolerationSeconds: 60
          affinity:
            podAntiAffinity:
              requiredDuringScheduling:
                - key: istio
                  topologyKey: kubernetes.io/hostname
                  operator: In
                  values:
                    - pilot
        container:
          resources:
            limits: # Adjust according to the need
              cpu: "1"
              memory: 1Gi

The Service Mesh Member Roll (SMMR) identifies the projects associated with the Service Mesh control plane. Only projects listed on the roll are affected by the control plane; adding a project to the member roll links it to a specific control plane deployment. Telecom operators can install and run the SMMR using the configuration below:

SMMR YAML

YAML
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - <microservices-namespace> # namespace that needs to be Istio-injected

Installation
1. Install the SMCP (Service Mesh Control Plane) using the SMCP YAML above.
2. Install the SMMR (Service Mesh Member Roll) using the SMMR YAML above.

Conclusion
Istio simplifies communication between 5G telecom microservice pods in a Kubernetes environment and enables seamless connectivity, control, monitoring, and security of microservice architectures across different platforms. It supports workloads in containers and virtual machines. With Istio, the future of telecom IoT microservice pod architecture looks promising, with improved efficiency, security, and scalability.
ARM-based systems are ubiquitous in today's world. Most of our smartphones, tablets, smart speakers, smart thermostats, and even data centers are likely powered by an ARM-based processor. The difference between a traditional laptop using Intel or AMD x86 chips and ARM is that ARM processors have a smaller form factor, consume less power, and come in a variety of flavors. Among the multitude of ARM processor offerings, we will pick the ARM Cortex-M series and build a bare-metal operating system from scratch. We will use the arm-none-eabi toolchain and QEMU for rapid prototyping. The host system is Ubuntu 18.04, and both the toolchain and QEMU can be installed from the Ubuntu software repository. QEMU can be invoked with the command line below. It emulates the Stellaris board, which has 256K of flash memory and 64K of SRAM.

qemu-system-arm -M lm3s6965evb --kernel main.bin --serial stdio

When you compile a typical C program, whether for ARM or Intel/AMD processors, the structure will look like the code below. The entry point for the program is at main. You may use a library function printf to print out a statement on a terminal console.

C
int main (int argc, char* argv[])
{
    printf("Hello World\n");
    return 0;
}
// gcc -o main main.c

Under the hood, the compiler and linker add a C runtime library to your code, which provides the startup code, printf, and so on that make your program run. By contrast, a vanilla bare-metal firmware has to implement its own startup code, create the linker file, and define an entry point for its code to run. The code block below defines a linker script. It defines the starting address and length of the flash and RAM memory regions. The linker takes the object code as input and performs relocation, copying the different sections of the code to the appropriate addresses as defined in the linker file.

C
ENTRY(Reset_Handler)

MEMORY
{
    flash (rx)  : ORIGIN = 0x00000000, LENGTH = 256K
    ram   (rwx) : ORIGIN = 0x20000000, LENGTH = 64K
}
.....
SECTIONS
{
    .text :
    {
        . = ALIGN(4);
        *(.isrvectors)
        *(.text)
        *(.rodata)
        *(.rodata*)
        . = ALIGN(4);
        _endflash = .;
    } > flash

    .data :
    {
        . = ALIGN(4);
        _start_data = .;
        *(vtable)
        *(.data)
        . = ALIGN(4);
        _end_data = .;
    } > ram AT > flash
.....
}

The interrupt vectors, text, and read-only sections are loaded into the flash memory, and our code runs directly from flash. The mutable data is loaded into RAM.

C
.align 2
.thumb
.syntax unified
.section .isrvectors
    .word vTopRam               /* Top of Stack */
    .word Reset_Handler+1       /* Reset Handler */
    .word NMI_Handler+1         /* NMI Handler */
    .word HardFault_Handler+1   /* Hard Fault Handler */
    .word MemManage_Handler+1   /* MPU Fault Handler */
    .word BusFault_Handler+1    /* Bus Fault Handler */
    .word UsageFault_Handler+1  /* Usage Fault Handler */
    .word 0                     /* Reserved */
    .word 0                     /* Reserved */
    .word 0                     /* Reserved */
    .word 0                     /* Reserved */
    .word SVC_Handler+1         /* SVCall Handler */
    .word DebugMon_Handler+1    /* Debug Monitor Handler */
    .word 0                     /* Reserved */
    .word PendSV_Handler+1      /* PendSV Handler */
    .word SysTick_Handler+1     /* SysTick Handler */

From the interrupt service routine vectors, Reset_Handler, SVC_Handler, and SysTick_Handler are of importance to us in this tutorial. The following register map is from the TI Stellaris LM3S6965 datasheet. It defines the registers we shall use in our tiny OS.

C
#define STCTRL     (*((volatile unsigned int *)0xE000E010)) // SysTick Control Register
#define STRELOAD   (*((volatile unsigned int *)0xE000E014)) // SysTick Load Timer Value
#define STCURRENT  (*((volatile unsigned int *)0xE000E018)) // Read Current Timer Value
#define INTCTRL    (*((volatile unsigned int *)0xE000ED04)) // Interrupt Control Register
#define SYSPRI2    (*((volatile unsigned int *)0xE000ED1C)) // System Interrupt Priority
#define SYSPRI3    (*((volatile unsigned int *)0xE000ED20)) // System Interrupt Priority
#define SYSHNDCTRL (*((volatile unsigned int *)0xE000ED24))

#define SVC_PEND()  ((SYSHNDCTRL & 0x8000) ? 1 : 0) // SuperVisory Call Pending
#define TICK_PEND() ((SYSHNDCTRL & 0x800) ? 1 : 0)  // SysTick Pending
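These SysTick registers are what the OS timer described below is built on. As a quick illustration, here is a minimal sketch (not from the original article) of how the tick source could be armed with them; SYSTEM_CLOCK_HZ is an assumption and must match the actual core clock of the board or QEMU machine.

C
/* Minimal SysTick setup sketch using the registers defined above.
 * SYSTEM_CLOCK_HZ is an assumed value -- adjust it to the real core clock. */
#define SYSTEM_CLOCK_HZ  50000000U   /* assumption for an LM3S6965-class part */
#define OS_TICK_HZ       100U        /* ~10 ms OS tick */

void os_timer_init(void)
{
    STCTRL    = 0;                                   /* stop SysTick while configuring          */
    STRELOAD  = (SYSTEM_CLOCK_HZ / OS_TICK_HZ) - 1U; /* 24-bit reload value for a ~10 ms period */
    STCURRENT = 0;                                   /* any write clears the current count      */
    STCTRL    = 0x7;                                 /* CLKSOURCE | TICKINT | ENABLE            */
}

Exception priorities for SVC and SysTick are configured separately; the TICK_PRIO and SVC_PRIO macros shown later handle that.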
Figure 1: Setup Flow

Our Reset_Handler function is part of the startup code. The Cortex-M architecture defines a handler mode and a thread mode. All exceptions run in handler mode, and user code runs in thread mode. On power-on reset, we are in thread mode. For our OS to function, we require the following:
Startup code: reset handler and ISR vectors
Setting up exceptions for the supervisor/software interrupt and the OS timer
Defining common system calls such as read/write/sleep and our custom create_task
Defining a Task Control Block (TCB) struct and a circular linked list of TCBs called the Run Queue
The ARM architecture defines a 24-bit SysTick timer, which is present in all Cortex-M3 SoCs. To make our OS generic and portable, we use the SysTick timer to generate periodic interrupts (~10 ms) for our OS timer, which is also when our scheduler kicks in to manage tasks. The priority for SVC is kept higher than SysTick in our OS. Reset_Handler is defined below with a jump to c_entry().

C
.thumb_func
Reset_Handler:
    # add assembly initializations here
    LDR r0, =c_entry
    BX  r0

#define TICK_PRIO(prio) {SYSPRI3 &= 0x1FFFFFFF; \
                         SYSPRI3 |= (prio<<28); \
                        }
#define SVC_PRIO(prio)  {SYSPRI2 &= 0x1FFFFFFF; \
                         SYSPRI2 |= (prio<<28); \
                        }

The code snippet below shows sample tasks and how they are added to the Run Queue of our OS. We define three tasks that are similar to the void loop() in Arduino, where code runs forever. In our simple tasks, we print the task ID and then go to sleep for a variable amount of time. The write() and sleep() APIs are system calls.

C
typedef void (*CallBack)();

typedef struct _task_struct {
    CallBack func;
    unsigned int priority;
} TASK_STRUCT;
....
// Sample Tasks
void task1()
{
    while (1) {
        write("T1 ", 2);
        // yield cpu
        sleep(1000);
    }
}
...
// Define three tasks with different priorities. A lower number means higher priority.
TASK_STRUCT task[3];
task[0].priority = 8;
task[0].func = &task1;
task[1].priority = 5;
task[1].func = &task2;
task[2].priority = 10;
task[2].func = &task3;
create_task((void*)&task, 3);
...

The ARM Procedure Call Standard defines which groups of ARM registers are preserved or clobbered when a function call happens. Registers R0-R3 hold the arguments to a function, and R0 also holds the function's return value. You will notice this in all exception-handling routines. The assembly code snippet below triggers an SVC interrupt, which jumps to the SVC handler.

C
#define TASK_CREATE 31
....
create_task:
    @ r0-r3 hold the arguments and are saved automatically.
    stmfd sp!,{lr}      // Push return address onto the fully descending stack
    push  {r4-r11}      // Save r4-r11
    SVC   #TASK_CREATE  // Supervisor call to jump into handler mode
    pop   {r4-r11}      // Pop back the saved registers
    ldmfd sp!,{lr}      // Pop LR
    mov   pc,lr         // Return to the caller
...
The code snippet below defines the SVC handler. From the SVC instruction, we extract the immediate number, which in this case is #31, and use it in our C SVC handler function, which initializes our run queue linked list, defined as RUNQ.

C
// SVC Interrupt Handler
SVC_Handler:
    ...
    CPSID i            // disable system interrupts
    ..
    // Extract the SVC immediate value
    ldr  r1,[sp,#28]
    ldrb r1,[r1,#-2]
    BL   C_SVC_Hndlr   // Branch to the C SVC handler
    CPSIE i            // enable system interrupts
    BX   LR            // Jump to the return address
...

int C_SVC_Hndlr(void *ptr, int svc_num)
{
    int ret = 0, len = 0;
    void *stck_loc = ptr;

    switch (svc_num) {
    case 2: { // Write system call
        char *data = (char*)*(unsigned int *)(stck_loc); // R0 on stack
        len = ((unsigned int *)stck_loc)[1];             // R1 on stack
        put(data, len);                                  // Write to the serial terminal
        break;
    }
    case 4:  // Sleep system call
        ms_delay(*(unsigned*)ptr); // *ptr holds the delay value
        break;
    case 31: // Create Task system call
        task_create((void *)stck_loc);
        break;
    }
    return ret;
}

After defining our RUNQ linked list, we arm the SysTick timer, point our program counter to the starting address of the first function in our list, and exit handler mode.

C
// Simple Scheduler
void Scheduler(void)
{
    uint8_t max_prio = 64;
    TCB *pt   = RUNQ;
    TCB *next = RUNQ;

    // find a task which is not sleeping and not blocked
    do {
        pt = pt->next;
        if ((pt->priority < max_prio) && ((pt->is_blocked) == 0) && ((pt->sleep) == 0)) {
            max_prio = pt->priority;
            next = pt;
        }
    } while (RUNQ != pt);
    RUNQ = next;
}

When the SysTick timer expires, our scheduler function is invoked. It picks the next task in our queue that is not sleeping, is not blocked, and has the highest priority. Now, with our OS implemented, it is time to compile and build our firmware and run it on QEMU.

Figure 2: QEMU Output

In the QEMU output, we see the task IDs getting printed. Task T2 has the highest priority and gets picked first by our scheduler. It prints its task ID and goes to sleep, yielding the CPU. The scheduler then picks the next task, T1, with a medium priority until it yields, and finally T3 runs. Since T2 sleeps twice as long as T1 and T3, we see T1 and T3 run again before T2 gets scheduled back, after which the pattern T2, T1, T3 repeats.

Conclusion
We have introduced a simple bare-metal OS that implements system calls and a simple round-robin scheduler to loop through all the tasks in the system. Our OS lacks locking primitives like semaphores and mutexes. They can be implemented by adding another linked list of waiting tasks. The mutex lock and unlock operations can be handled with a system call which, when triggered, disables the interrupts (and thus the scheduler), which allows for serialization of the code. If the lock is already held by another task, the calling task is added to the wait queue and is de-queued when the mutex unlock operation occurs. Overall, this tutorial provides insight into how firmware-based OS/RTOS internals work. It also serves as a template for readers to implement their own OS and expand on the ideas of operating systems: process management, virtual memory, device drivers, etc.
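To make the mutex idea sketched in the conclusion a little more concrete, here is a minimal illustration that is not part of the original tutorial. It builds on the article's TCB and RUNQ structures, assumes it runs inside an SVC handler with interrupts disabled, and leaves the wait-list bookkeeping as comments; all field and function names are illustrative.

C
/* Hypothetical mutex built on the wait-list approach described above. */
typedef struct _mutex {
    TCB *owner;       /* NULL when the mutex is free          */
    TCB *wait_list;   /* tasks blocked waiting for this mutex */
} MUTEX;

void mutex_lock_svc(MUTEX *m)   /* invoked via SVC, interrupts already disabled */
{
    if (m->owner == NULL) {
        m->owner = RUNQ;        /* the current task takes the lock */
    } else {
        RUNQ->is_blocked = 1;   /* mark the caller blocked          */
        /* append RUNQ to m->wait_list, then run the scheduler to pick another task */
    }
}

void mutex_unlock_svc(MUTEX *m) /* invoked via SVC, interrupts already disabled */
{
    m->owner = NULL;
    if (m->wait_list) {
        m->wait_list->is_blocked = 0; /* wake the first waiter                    */
        /* remove it from the wait list; it becomes runnable for the scheduler    */
    }
}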
If you're new to the DIY IoT community, or even if you're a seasoned maker who needs to spin up a quick prototype that collects some sensor data and takes actions based on it automatically, you probably have an Arduino running some code somewhere in your workshop. Now, if you have been adding more sensors, controls, and peripherals to your little system for a while, until it's not so little anymore, or if you find yourself looking for real-time capabilities or just more power, it might be time to upgrade to a 32-bit ARM Cortex-M based chip such as one from the STM32 family. For the purposes of this tutorial, we will focus on the main advantages of making the switch and the high-level firmware changes needed, along with code examples. I would suggest using an STM32 Discovery board to play with and test the code before moving on to designing a custom PCB with an STM32 chip.

IDE and Setup
If you're used to the Arduino IDE for development, suddenly switching over to something more widely used in the industry, like Keil Studio, will probably be too much of a jump. A good middle ground is the STM32CubeIDE. As a summary, here are the basic tools you will need to get started:
STM32CubeIDE: Download links
STM32CubeMX: An add-on to the STM32 IDE that provides an easy GUI for configuring the microcontroller. Download link
STM32 development board with programming cable
Here is a good quick-start guide from Digikey for installing and setting up the IDE and connecting to the development board. Next, we will get to the heart of it all: porting over the code.

Porting the Firmware Peripheral Code
The main protocols we will cover in this tutorial, chosen for how widespread they are, include digital read/write, I2C, ADC (for reading analog sensors, for example), and PWM.

1. Digital I/O
This is relatively easy; you just have to replace digitalWrite() and digitalRead() with the respective STM32 HAL functions. Here is a code example.

C++
// Arduino code for Digital I/O
pinMode(LED_PIN, OUTPUT);
digitalWrite(LED_PIN, HIGH);
int state = digitalRead(LED_PIN);

C++
// STM32 HAL Code
HAL_GPIO_WritePin(GPIOA, GPIO_PIN_5, GPIO_PIN_SET);
GPIO_PinState state = HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_5);

2. PWM
Controlling PWM-based outputs is relatively complicated unless you're using Arduino libraries built for specific modules. For example, to control an LED strip or servos directly, it helps to know how to work with PWM signals. Here is an example of setting up a PWM output.
In the graphical interface of your STM32CubeIDE, configure Timer2 to operate in PWM mode and set CH1 as output.
Set the RCC mode and configuration as shown in the image in the System Core settings.
Hit "Generate Code" from the "Project" menu on the menu bar to auto-generate the code to configure the PWM signal. Here is a screenshot of what it looked like for me.
Add some code in your main function to test the PWM output.

C
int main(void)
{
    int32_t dutyCycle = 0;

    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();
    MX_TIM2_Init();

    HAL_TIM_PWM_Start(&htim2, TIM_CHANNEL_1);
    while (1)
    {
        for (dutyCycle = 0; dutyCycle < 65535; dutyCycle += 70)
        {
            TIM2->CCR1 = dutyCycle;
            HAL_Delay(1);
        }
        for (dutyCycle = 65535; dutyCycle > 0; dutyCycle -= 70)
        {
            TIM2->CCR1 = dutyCycle;
            HAL_Delay(1);
        }
    }
}

Now, if you connect the GPIO pin attached to TIM2 to an oscilloscope, you'll see the PWM signal with the duty cycle you set!
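If you prefer thinking in percentages rather than raw compare counts, here is a small optional helper, a sketch that assumes the TIM2 handle generated by the project above, which scales a 0-100% duty cycle to whatever auto-reload value the timer is configured with.

C
/* Sketch only: set the PWM duty as a percentage of the timer's auto-reload value. */
void pwm_set_duty_percent(TIM_HandleTypeDef *htim, uint32_t channel, uint8_t percent)
{
    uint32_t period  = __HAL_TIM_GET_AUTORELOAD(htim);       /* current ARR value        */
    uint32_t compare = ((uint32_t)percent * period) / 100U;  /* scale 0-100% to counts   */
    __HAL_TIM_SET_COMPARE(htim, channel, compare);           /* update the compare value */
}

/* Example: pwm_set_duty_percent(&htim2, TIM_CHANNEL_1, 25); // ~25% duty on TIM2 CH1 */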
You can check which GPIO pin is attached to that timer using the configuration view for that timer; TIM2, if you followed the example, as shown in the image below.

3. Analog Read
Another commonly used function you've probably used your Arduino for is reading analog sensors. With an Arduino, it was as simple as calling analogRead(pin_number). On an STM32, it's not that much harder. You can follow the steps below.
Go to the "Pinout & Configuration" tab. Enable ADC1 and select the channel connected to your analog sensor (e.g., ADC1_IN0 for PA0).
Configure the ADC parameters as needed. From the Analog tab, select the ADC you want to use, and select one of the interrupts that doesn't show any conflicts; that is, one that isn't highlighted in red. If you go to the GPIO section, it will show which pin on the MCU it's connected to.
"Generate Code" as before for the configuration code.
Here is some sample code for your main function to read the analog value:

C
int main(void)
{
    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();
    MX_ADC1_Init();

    HAL_ADC_Start(&hadc1);
    while (1)
    {
        if (HAL_ADC_PollForConversion(&hadc1, HAL_MAX_DELAY) == HAL_OK)
        {
            uint32_t adcValue = HAL_ADC_GetValue(&hadc1);
            printf("ADC Value: %lu\n", adcValue);
        }
        HAL_Delay(1000);
    }
}

4. I2C
A lot of industrial-quality sensors, I/O expansion devices, multiplexers, displays, and other useful peripherals commonly communicate over I2C. On an Arduino, you probably used the Wire library to communicate with I2C peripherals. Let's dive into how to communicate with an I2C peripheral on an STM32.
Go to the graphical interface, enable I2C1 (or another I2C instance), and configure the pins (e.g., PB6 for I2C1_SCL and PB7 for I2C1_SDA).
Configure the I2C parameters as needed (e.g., speed, addressing mode). I kept the default settings for this example.
Generate the code.
Here is some sample code for sending and receiving data over I2C.

C
int main(void)
{
    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();
    MX_I2C1_Init();

    uint8_t data = 0x00;
    HAL_I2C_Master_Transmit(&hi2c1, (uint16_t)0x50 << 1, &data, 1, HAL_MAX_DELAY);
    HAL_I2C_Master_Receive(&hi2c1, (uint16_t)0x50 << 1, &data, 1, HAL_MAX_DELAY);

    while (1)
    {
    }
}

static void MX_I2C1_Init(void)
{
    hi2c1.Instance = I2C1;
    hi2c1.Init.ClockSpeed = 100000;
    hi2c1.Init.DutyCycle = I2C_DUTYCYCLE_2;
    hi2c1.Init.OwnAddress1 = 0;
    hi2c1.Init.AddressingMode = I2C_ADDRESSINGMODE_7BIT;
    hi2c1.Init.DualAddressMode = I2C_DUALADDRESS_DISABLE;
    hi2c1.Init.OwnAddress2 = 0;
    hi2c1.Init.GeneralCallMode = I2C_GENERALCALL_DISABLE;
    hi2c1.Init.NoStretchMode = I2C_NOSTRETCH_DISABLE;
    if (HAL_I2C_Init(&hi2c1) != HAL_OK)
    {
        Error_Handler();
    }
}

Conclusion
In this article, we covered interacting with peripherals using some of the most common communication protocols with an STM32. If you would like a tutorial on other communication protocols, or have questions about configuring your first STM32 controller, please leave a comment below.
The Internet of Things has become integral to our daily routines, and devices are increasingly becoming smart. As this domain expands, there's an urgent need to guarantee the security, productivity, and efficiency of these software-enabled devices. Hence, the Rust programming language is becoming the second most popular choice, after C++, for IoT device developers. This article will explore why Rust is becoming a favored choice for embedded IoT development and how it can be used effectively in this field. C++ has always been a go-to solution for IoT and embedded systems; the language has an experienced development community and is widely used by engineers worldwide. Recently, however, Rust came into play and showed its potential. So, we decided to explore why developers keep leaning toward embedded programming with Rust over tried-and-proven C++.

History of Rust
Rust, a modern systems programming language, was initially conceptualized by Mozilla and the broader development community. It was designed for secure, fast, and parallel application development, eliminating potential memory and security problems in embedded solutions and custom IoT development. Since its inception in 2006, the Rust language has undergone many changes and improvements and was finally introduced as an open-source ecosystem in 2010. Beyond the development community, major corporations like Microsoft, Google, Amazon, Facebook, Intel, and GitHub also support and finance Rust, furthering its development and usage. This undoubtedly speeds up its growth and increases its attractiveness.

Rust vs. C++ Dilemma: Why Everyone Is Shifting From C++ to Rust in Embedded System Creation
Rust and C++ are both powerful tools for high-performance application development. For embedded IoT applications, several crucial factors influence development speed, security, and reliability beyond the foundational software. Below are the five most significant:

1. Security and Memory Management
A standout feature of Rust is its compile-time safety checks. They ensure that many memory-related issues, like memory leaks and buffer overflows, are detected and addressed during compilation, leading to more dependable and maintainable code. Rust employs a unique ownership system and move semantics that manage object lifetimes and prevent conflicting data access. However, this uniqueness can raise the entry barrier, particularly for newer developers, who might find these techniques somewhat unconventional. The C++ language also provides memory control, but it requires more careful programming and is susceptible to pitfalls like memory leaks and unsafe data access if not handled precisely.

2. Performance
Rust aims to be competitive with C++ in performance. The Rust compiler generates efficient machine code, and thanks to its strict type system, Rust can optimize code predictably. C++ also delivers high performance and provides a wide range of tools for optimization.

3. Code Syntax and Readability
Rust offers a modern, clean syntax that helps create readable and understandable code. Rust's trait system makes the code more expressive, legible, and easily extendable. C++ carries historical syntax, which may be less intuitive and readable for some developers.
4. Integration and Multitasking
Rust provides a convenient way to integrate with C and C++ through its Foreign Function Interface (FFI), which makes it easier to port existing projects, though it still requires additional effort. Rust's ownership and type systems rule out data races and help create safe multitasking applications, and Rust supports threads and concurrent multitasking out of the box. C++ provides multitasking as well, and it integrates with existing C code with little or no effort.

5. Ecosystem and Community
Rust has an active and rapidly growing development community. Cargo, Rust's dependency and build management system, makes development more convenient and predictable. C++ also has a large and experienced community and an extensive ecosystem of libraries and tools that still exceeds Rust's. As we can see, Rust offers IoT app developers advanced safety features that prevent many common errors and result in more reliable, clearer code. It also benefits from active community support and uses Cargo for efficient dependency management and compilation. At the same time, Rust provides numerous tools and out-of-the-box libraries that allow results comparable to those of C++ but with significantly less effort and code. Yet Rust still trails C++ in ecosystem maturity, C integration, and accessibility for beginners.

Real-Life Case of Using Rust for IoT Device Development: Smart Monitoring System for Toddlers
The Sigma Software team was engaged as a technical partner to help develop a product that simplifies diverse childcare routines for parents. Namely, we were to build software for a baby monitoring device based on the ESP32-S3 MCU. Our team was looking for the best-fit solution that could provide everything needed for successful delivery: multitasking capabilities, a secure coding environment, and interfaces for the network, microphone, and speaker connections. We saw the potential of Rust to fulfill these requirements, as it had a robust ecosystem that allowed us to integrate the required functionality without much effort. Even though we chose Rust as our primary tool, we also integrated specific C and C++ libraries using the Foreign Function Interface (FFI). As a result, it took us just six months from the project's kick-off to the release of its beta version. One month later, the solution was already on the market and available for purchase. Over the next half-year, we refined and expanded its functionality, including remote control, schedule planning, and smooth integration into the user's existing ecosystem. The functionality expansion went smoothly, without much effort and without leaving code smells behind, reducing the need for refactoring to a minimum. This project, completed by a trio of developers in just over a year, has reached over 5,000 households, underscoring Rust's viability in IoT development.

C++ vs. Rust: Final Thoughts
Unlike C++, using Rust for embedded systems comes with a learning curve. Yes, this requires more time at the start of the project, as developers need to learn the language's innovations and features. Yes, finding, refining, or partially porting the necessary libraries for a specific solution will take longer. But the result is beautiful, readable code that is easy to extend, and a productive, safe, and lightweight solution, exactly what embedded IoT applications need.
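Since the ownership model above is discussed only in prose, here is a tiny, self-contained illustration, not taken from the project described in this article, of the compile-time checks it refers to: once a value's ownership moves, the old binding can no longer be used, which is how whole classes of use-after-free and data-race bugs are ruled out before firmware ever runs. All names are hypothetical.

Rust
// Minimal ownership/move illustration (hypothetical names, not project code).
fn consume(buffer: Vec<u8>) -> usize {
    buffer.len() // `buffer` is dropped here; its memory is freed deterministically
}

fn main() {
    let reading: Vec<u8> = vec![0x17, 0x2A, 0x3C]; // pretend sensor reading
    let len = consume(reading);                    // ownership moves into `consume`
    println!("bytes consumed: {}", len);
    // println!("{:?}", reading); // would not compile: `reading` was moved above
}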
Network onboarding — the process through which new devices gain access to an organization's network — is a cornerstone of IT operations, affecting everything from security to user satisfaction. Traditionally, this process has been fraught with challenges, particularly at scale. In environments with hundreds or thousands of devices, manual onboarding can consume disproportionate amounts of time and resources. According to a study by Cisco, IT teams spend approximately 20% of their time managing device connectivity issues, highlighting the operational burden of current practices. Moreover, the scale of the problem becomes more apparent considering the proliferation of IoT devices. Gartner reports that by 2025, over 75 billion connected IoT devices will be in use worldwide. Each device, from the simplest sensor to complex industrial machinery, requires precise and secure network integration, a task that becomes exponentially difficult as network environments grow in complexity and scale.

In this context, Artificial Intelligence (AI) and automation are not just enhancements but necessities in transforming network connectivity. These technologies promise to streamline the onboarding process, reduce human error, and enhance security protocols. The market for AI in network management alone is expected to reach $12 billion by 2023, according to a report from Markets and Markets, indicating a significant investment and interest in leveraging these technologies to address longstanding issues in network operations. By integrating AI and automation, organizations can anticipate and mitigate connectivity issues before they impact end users, customize onboarding procedures for different device types, and enforce security standards automatically. This transition is crucial for keeping pace with the rapid growth of networked devices and the evolving expectations of a digitally connected world.

Current State of Network Onboarding
Network onboarding is a critical IT process where new devices are registered and granted access to an organization's network. This process encompasses the authentication, authorization, and configuration stages necessary for devices to securely communicate within the network. Despite its importance, traditional network onboarding is often hampered by manual interventions, non-standardized procedures, and inadequate security measures.

Common Challenges in Traditional Network Onboarding Processes
The conventional approach to network onboarding presents several challenges:
Scalability issues: Manual onboarding processes are inherently non-scalable. As the number of devices increases, the workload and complexity of manually configuring each device multiply accordingly. A report by Network World indicates that companies often experience bottlenecks during major onboarding events, such as incorporating new business units or updating network infrastructure.
Error-prone procedures: Human intervention in device setup and configuration is prone to errors. These mistakes can lead to misconfigurations, which, as per an IBM Security report, are responsible for nearly 95% of all network security breaches.
Time-consuming: Onboarding can be a time-intensive process, particularly in large enterprises with thousands of devices. According to a survey conducted by TechRepublic, IT departments spend an average of 28 hours per week on network management tasks, including device onboarding.
Impact of These Challenges on User Experience and Network Efficiency
The repercussions of inefficient network onboarding are significant:
User experience: Slow or faulty onboarding processes can lead to prolonged downtime for end users, affecting productivity and satisfaction. A study by Forrester found that delays in network access are among the top complaints from new employees during onboarding.
Network efficiency: Inefficient onboarding can strain network resources. Devices that are improperly integrated may consume excessive bandwidth or disrupt network segments, leading to performance degradation across the enterprise.
The state of network onboarding, with its reliance on outdated methods and the accompanying challenges, underscores the need for a transformation in how organizations approach this essential function. The integration of AI and automation into network onboarding processes is not merely an upgrade; it is becoming a fundamental necessity to ensure scalability, security, and efficiency in modern network environments.

Automation in Network Onboarding
Automation technologies, particularly Robotic Process Automation (RPA) and orchestration tools, are revolutionizing the network onboarding process by eliminating the need for manual intervention in repetitive and complex tasks. These technologies enable IT departments to automate the entire lifecycle of device management, from initial deployment to updates and security compliance.

Explanation of Automation Technologies
Robotic Process Automation (RPA): RPA involves configuring software robots to mimic human actions in interacting with digital systems. RPA can automate rule-based, repetitive tasks such as entering data, configuring settings, and performing routine checks. For network onboarding, RPA can quickly execute configurations across multiple devices, reducing the manual workload and minimizing human errors.
Orchestration tools: Orchestration involves managing interactions and automation across several IT systems. In network onboarding, orchestration tools can coordinate multiple automation tasks to streamline the setup and integration of new devices into the network. Tools like Ansible and Terraform are popular in this space, providing code-based infrastructure automation that ensures consistent and repeatable configurations.

Benefits of Automation for Repetitive and Complex Onboarding Tasks
The implementation of automation in network onboarding offers numerous benefits:
Speed and efficiency: Automation significantly speeds up the onboarding process. According to a study by Gartner, automation can reduce the time required for network provisioning tasks by up to 90%. This efficiency is particularly beneficial in environments with high device turnover or rapid scaling needs.
Accuracy and consistency: Automated processes are less prone to errors compared to manual configurations. A report from Deloitte highlights that automation can improve operational accuracy by up to 99%, ensuring that devices are configured correctly the first time, every time.
Scalability: Automation makes it easier to scale network operations efficiently. Automated workflows can be replicated across countless devices without additional time costs, supporting growth without corresponding increases in IT staffing.

Integration of Automation With Existing Network Management Systems
Integrating automation technologies into existing network management systems is crucial for maximizing their benefits.
This integration allows for:
Centralized management: Administrators can manage and monitor automated tasks from a central platform, improving oversight and control over the network.
Enhanced security: By automating security configurations and compliance checks, networks remain protected against vulnerabilities consistently and in real time.
Data-driven decisions: Automation tools can generate detailed logs and reports, providing insights into network performance and helping IT teams make informed decisions about infrastructure and resource allocation.
For example, using Ansible to automate network device configurations involves creating playbooks that define the desired state of network settings. These playbooks can then be executed across the entire network, applying consistent configurations, executing security policies, and ensuring that all devices comply with organizational standards, all without manual input; a brief playbook sketch follows at the end of this section.
In conclusion, the strategic application of RPA and orchestration in network onboarding not only enhances operational efficiency but also transforms the capacity of networks to grow and adapt in a secure and manageable manner. This automation is increasingly seen as a critical component of modern network management strategies, pivotal in driving the next wave of digital transformation.
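To make the Ansible approach above a little more concrete, here is a minimal, hypothetical playbook sketch; the inventory group, module, interface, and VLAN values are assumptions for illustration, not settings from the article.

YAML
# Hypothetical sketch: declare the desired state once and apply it to every onboarded device.
- name: Enforce baseline onboarding configuration
  hosts: network_devices            # assumed inventory group of newly onboarded switches
  gather_facts: false
  tasks:
    - name: Apply standard access-port settings
      cisco.ios.ios_config:         # example module; use the one matching your platform
        parents: interface GigabitEthernet0/1
        lines:
          - switchport access vlan 120
          - description onboarded-by-automation

Running it with ansible-playbook -i inventory onboarding.yml would apply the same desired state to every device in the group.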
AI and Automation Synergies
The convergence of Artificial Intelligence (AI) and automation represents a transformative leap forward in network onboarding. AI enhances the capabilities of automation by introducing predictive analytics and adaptive decision-making into the process, allowing for more dynamic and intelligent system management.

How AI and Automation Complement Each Other
AI and automation are synergistic technologies that combine AI's decision-making capabilities with automation's efficiency. AI can analyze data from network operations to identify patterns and predict issues before they arise. Automation can then take immediate action based on AI's insights to adjust configurations or address potential problems without human intervention. For instance, AI can predict bandwidth needs and instruct automation tools to adjust access point parameters in real time to meet demand.

Systems Where AI Inputs Direct Automation Tasks
In network onboarding, systems integrated with both AI and automation use AI to analyze incoming device data and make decisions about how devices should be onboarded. For example, an AI system might analyze the security profile of a device and decide which network segment it should connect to, while automation tools carry out the actual connection process.

Examples of AI and Automation Working Together to Improve Network Connectivity
One practical example is the use of machine learning models to classify devices based on usage patterns and security risks. Once classified, automated scripts are triggered to configure network access accordingly. For example, high-risk devices could be automatically restricted to accessing only certain parts of the network.

Technical Deep Dive: Implementing AI and Automation in Onboarding
AI models, particularly machine learning algorithms, play a critical role in enhancing network onboarding. Algorithms such as decision trees, support vector machines, or neural networks can be trained on historical data to predict device behavior or identify potential security threats.

Step-By-Step Guide on Implementing These Models Using a Specific Technology Stack
Let's consider a scenario where we use Python and TensorFlow to predict network load and Ansible for automation:
1. Data collection: Collect historical data on network usage patterns, device types, and onboarding times.
2. Model training: Use TensorFlow to build a neural network model that predicts network load based on time of day and device type. Train the model with the collected data.

Python
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# Example model:
model = Sequential([
    Dense(10, activation='relu', input_shape=(num_features,)),
    Dense(10, activation='relu'),
    Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=10)

3. Automation with Ansible: Create an Ansible playbook that adjusts network settings based on model predictions.

YAML
- hosts: network_devices
  tasks:
    - name: Adjust network configuration
      command: "adjust_network.sh {{ prediction }}"

Using the above TensorFlow model and Ansible playbook, integrate them with an API that retrieves model predictions and feeds them into the Ansible playbook for execution.

Discussion on the Use of APIs and Other Interfaces for Automation Tools
APIs play a crucial role in the integration of AI and automation by allowing systems to communicate seamlessly. For example, RESTful APIs can be used to send AI predictions from a central server to network devices managed by Ansible.

Security Considerations: Implications of AI and Automated Onboarding
The integration of AI and automation introduces specific security challenges, particularly in data privacy and system integrity. AI systems must be trained on secure, anonymized data to prevent leakage of sensitive information. Some best practices for ensuring data privacy and network security in an AI-enhanced, automated environment are:
Data encryption: Encrypt data used for training AI models to ensure that sensitive information remains secure.
Regular audits: Conduct regular security audits of AI and automation tools to detect vulnerabilities.
Access controls: Implement strict access controls for systems handling AI and automation tasks to prevent unauthorized access.
By following these practices, organizations can mitigate potential security risks associated with AI and automation in network onboarding.

Future Trends and Innovations
As AI and automation technologies evolve, their integration into network onboarding is expected to become even more sophisticated. The advent of quantum computing and advanced machine learning algorithms, such as deep reinforcement learning, promises to further enhance the predictive capabilities and efficiency of network systems. These technologies could enable real-time, adaptive network management that not only anticipates demand and potential issues but also dynamically reconfigures the network without human intervention. Predictions for the future landscape of network connectivity suggest a move towards fully autonomous networks, where AI-driven systems manage all aspects of network operations. This could lead to significant improvements in network resilience, security, and user experience, as these intelligent systems respond instantly to changes and threats.
Conclusion
The integration of AI and automation into network onboarding processes represents a significant leap forward in network management, addressing many of the traditional challenges associated with scalability, efficiency, and security. As these technologies continue to advance, their role in network architecture will only grow, making them indispensable tools for network architects and developers. Organizations are encouraged to invest in these technologies to not only streamline their operations but also to future-proof their networks against increasingly complex demands. Embracing AI and automation is not merely an enhancement — it's becoming essential for maintaining competitive advantage and operational effectiveness in the digital age.

References
Cisco, "Network Management: Challenges and Solutions," 2021.
Gartner, "Forecast: IoT Connected Devices," 2020.
Markets and Markets, "AI in Network Management Report," 2022.
Network World, "Scaling Network Operations: New Paradigms for IT Teams," 2021.
IBM Security, "Cost of a Data Breach Report 2020."
TechRepublic, "IT Network Management and Post-Pandemic Challenges," 2021.
Forrester, "The Employee Experience Imperative," 2020.
Deloitte, "Automation in Networking: Future of Network Administration," 2021.
ARM CPUs often outperform x86 CPUs in scenarios requiring high energy efficiency and lower power consumption. These characteristics make ARM preferred for edge and cloud environments. This blog post discusses the benefits of using Apache Kafka alongside ARM CPUs for real-time data processing in edge and hybrid cloud setups, highlighting energy efficiency, cost-effectiveness, and versatility. A wide range of use cases are explored across industries, including manufacturing, retail, smart cities, and telco.

Apache Kafka at the Edge and Hybrid Cloud
Apache Kafka is a distributed event streaming platform that enables building real-time streaming data pipelines and applications by providing capabilities for publishing, subscribing to, storing, and processing streams of records in a scalable and fault-tolerant way. Various examples exist for Kafka deployments on the edge. These use cases are related to several of the above categories and requirements, such as low hardware footprint, disconnected offline processing, hundreds of locations, and hybrid architectures.

Use Cases for Apache Kafka at the Edge
I have worked with enterprises across industries and the globe on the following scenarios:
Public sector: Local administration in each city, smart city projects including public transportation, traffic management, integration of various connected car platforms from different carmakers, cybersecurity (including IoT use cases such as capturing and processing camera images)
Transportation, logistics, railway, and aviation: Track and trace, Kafka in the trains for offline and local processing/storage, traveler information (delayed or canceled flight/train/bus), real-time loyalty platforms (class upgrade, lounge access)
Manufacturing (automotive, aerospace, semiconductors, chemical, food, and others): IoT aftermarket customer services, OEM in machines and vehicles, embedding into standard software such as ERP or MES systems, cybersecurity, a digital twin of devices/machines/production lines/processes, production line monitoring in factories for predictive maintenance/quality control/production efficiency, operations dashboards and line wellness (on-site for the plant manager, and aggregated global KPIs for executive management), track and trace and geofencing on the shop floor
Energy, utility, oil, and gas: Smart home, smart buildings, smart meters, monitoring of remote machines (e.g., for drilling, windmills, mining), pipeline and refinery operations (e.g., predictive failure or anomaly detection)
Telecommunications/media: OSS real-time monitoring/problem analysis/metrics reporting/root cause analysis/action response of the network devices and infrastructure (routers, switches, other network devices), BSS customer experience and OTT services (mobile app integration for millions of users), 5G edge (e.g., street sensors)
Healthcare: Track and trace in the hospital, remote monitoring, machine sensor analytics
Retailing, food, restaurants, and banking: Customer communication, cross-/up-selling, loyalty systems, payments in retail stores, perpetual inventory, point-of-sale (PoS) integration for (local) payments and (remote) CRM integration, EFTPOS (electronic funds transfer at point of sale)

Benefits for Kafka at the Edge and in the Cloud
Deploying the same technology in hybrid environments is not a new idea.
Project teams see tremendous benefits when using Kafka at the edge and in the data center or cloud:
Same APIs, concepts, development tools, and testing
Same architecture for streaming, storing, processing, and connecting systems, even if at a very different scale
Real-time synchronization between multiple environments included out of the box via the Kafka protocol
Let's explore how ARM CPUs fit into this discussion.

What Is an ARM CPU?
An ARM CPU refers to a family of CPUs based on the Advanced RISC Machine (ARM) architecture, which is a type of Reduced Instruction Set Computing (RISC) architecture. ARM CPUs have a reputation for high performance, power efficiency, and low cost. These characteristics make them particularly popular in mobile devices such as smartphones and tablets, and in an increasingly wide range of other devices like IoT (Internet of Things) gadgets, servers, and even desktop computers. The ARM architecture performs operations with a smaller number of computer instructions, allowing it to achieve high performance with lower power consumption compared to more complex instruction set computing (CISC) architectures like the x86 used by Intel and AMD CPUs. This efficiency is a key advantage for battery-powered devices, where energy conservation is critical. ARM Holdings, the company behind the ARM architecture, does not manufacture CPUs but licenses the architecture to other companies. These companies can then implement their own ARM-based processors, potentially customizing them for specific needs. This licensing model has led to wide adoption of ARM processors across various segments of the technology industry.

ARM32 vs. ARM64
ARM architectures come in different versions, primarily distinguished by their instruction set architectures and addressing capabilities. The most commonly referenced are ARMv7 and ARMv8 (which introduced the 64-bit AArch64 state), corresponding to 32-bit and 64-bit processing capabilities, respectively. Newer hardware for industrial PCs and home computers incorporates ARMv8 (64-bit). It is the foundation for smartphones, tablets, servers, and processors like Apple's A-series chips in iPhones and iPads. Even the cloud providers use the ARM architecture to build new processors for cloud computing, like Amazon's Graviton. ARMv8 processors can run both 32-bit and 64-bit applications, offering greater versatility and performance.

Key Features and Benefits of ARM CPUs
The key features and benefits of ARM CPUs include:
Power efficiency: Their design allows for significant power savings, extending battery life in portable devices.
Performance: While historically seen as less powerful than their x86 counterparts, modern ARM processors offer competitive performance, especially in multi-core configurations.
Customization: Companies can license the ARM architecture and customize their own chips, allowing for optimized processors that meet specific product requirements.
Ecosystem: Broad adoption across mobile, embedded, and increasingly server and desktop markets ensures a robust ecosystem of software and development tools.
ARM CPUs are central to the development of mobile computing and are becoming more important in other areas, including edge computing and data centers, as part of the shift towards more energy-efficient computing solutions.

Why ARM CPUs at the Edge (e.g., for Industrial IoT)?
ARM architecture is favored for edge computing, including Industrial IoT. It provides high power efficiency and performance within compact form factors.
These characteristics ensure devices can handle compute-intensive tasks locally. Only relevant data is transmitted to the cloud, which saves bandwidth and decreases latency. The efficiency of ARM CPUs is crucial for industrial applications where real-time processing and long battery life are essential. ARM's versatility and low power consumption make it ideal for the diverse needs of edge computing in various environments. For instance, in manufacturing, ARM-powered sensors on machines enable predictive maintenance by monitoring conditions like vibration and temperature. These sensors process data locally, offering real-time alerts on potential failures, reducing downtime, and saving costs. ARM's efficiency supports widespread deployment, making it ideal for continuous, autonomous monitoring in industrial environments. Why ARM in the Cloud? ARM's efficiency and performance advantages are driving its adoption in cloud computing. ARM-based processors, like Amazon's AWS Graviton, offer an attractive mix of high performance and lower power consumption compared to traditional x86 CPUs. This efficiency translates into cost savings and reduced environmental impact for cloud service providers and their customers. AWS Graviton, specifically designed for cloud workloads, exemplifies how ARM architecture can optimize operations in data centers, enhancing the performance of web servers, containerized applications, and microservices at a lower cost. This shift towards ARM in the cloud represents a significant move towards more energy-efficient and cost-effective data center operations. Apache Kafka on ARM: A Match for Edge and Cloud Workloads Using ARM architecture together with Apache Kafka, a distributed streaming platform, offers several advantages, especially in scenarios that demand high throughput, scalability, and energy efficiency. 1. Energy Efficiency and Cost-Effectiveness ARM processors are known for their low power consumption, which makes them cost-effective for running distributed systems like Kafka. Deploying Kafka on ARM-based servers can reduce operational costs, particularly in large-scale environments where energy consumption can significantly affect the budget. 2. Scalability Kafka handles large volumes of data and high throughput, characteristics that align well with the scalability of ARM processors in cloud environments. ARM's efficiency enables scaling out Kafka clusters more economically, allowing for the processing of streaming data in real time without incurring high energy or hardware costs. 3. Edge Computing Kafka is a common choice for real-time data processing and aggregation in edge computing scenarios. ARM's dominance in IoT and edge devices makes it a natural fit for these use cases. Running Kafka on ARM enables efficient data processing closer to the source, reducing latency and bandwidth usage by minimizing the need to send large volumes of data to central data centers. 4. Eco-Friendly Solutions With growing environmental concerns, ARM's energy efficiency contributes to more sustainable computing solutions. Deploying Kafka on ARM can be part of an eco-friendly strategy for organizations looking to minimize their carbon footprint. 5. Innovative Use Cases Combining Kafka with ARM opens up new possibilities for innovative applications in IoT, real-time analytics, and mobile applications. The efficiency of ARM allows for cost-effective experimentation and deployment of new services that require real-time data processing and streaming capabilities. 
Examples and Case Studies for Kafka at the Edge Overall, the combination of ARM and Apache Kafka supports the development of efficient, scalable, and sustainable data processing architectures, particularly suited for modern applications that require real-time performance with minimal energy consumption. For several use cases, architectures, and case studies about data streaming at the edge and hybrid cloud, check out my related articles, like Use Cases and Architectures for Kafka at the Edge, Apache Kafka is the New Black at the Edge in Industrial IoT, Logistics and Retailing, and Apache Kafka in Air-Gapped Zero-Trust Environments with Data Diode/Unidirectional Gateway. Most of these blog posts are a few years old, but they are as relevant today as at the time of writing. Official support for ARM CPUs at the edge completely changes the conversation about the challenges and solutions of deploying Kafka on edge infrastructure. Deploying Kafka at the edge has never been easier: if you buy a new Industrial PC (IPC) today, it will have enough hardware power to easily run Kafka and its ecosystem for data integration and stream processing. Kafka + ARM = Cost-Effective and Sustainable The article outlined the synergistic relationship between Apache Kafka and ARM CPUs. It enables efficient, scalable, and sustainable data processing architectures for edge and hybrid cloud environments. The adoption of ARM in cloud computing marks a significant shift towards more sustainable and performance-optimized computing solutions. The combination of Kafka and ARM CPUs is poised to drive innovation in real-time analytics, IoT, and mobile applications. A few great examples:
AWS Graviton to operate Kafka cost-efficiently in the public cloud
Confluent Platform's compatibility and support for ARM64 architectures at the edge
Do you already use ARM processors in your edge or cloud Kafka environment? Let's connect on LinkedIn and discuss it!
Software developers must weigh priorities such as speed, power consumption, and security when building IoT applications, and the choice of protocol is influenced by the complexity of the application and those priorities. For instance, developers might prioritize speed over power saving if the IoT application requires real-time data transmission. On the other hand, if the application deals with sensitive data, a developer might prioritize security over speed. Understanding these trade-offs is critical to making the right protocol choice and puts developers in control of their IoT development journey. As the Internet of Things (IoT) evolves, new devices and use cases keep emerging. This dynamic landscape gives rise to more specialized protocols and opens new possibilities for innovation. Simultaneously, older, obsolete protocols are naturally phasing out, paving the way for more efficient and effective solutions. This is a time of immense potential and opportunity in the world of IoT. Let's dive into IoT protocols. How Many IoT Protocols Are There? IoT protocols can be broadly classified into two categories: IoT data protocols and IoT network protocols. IoT Data Protocols IoT data protocols play an essential role in connecting low-power IoT devices. They facilitate communication with hardware on the user's end without relying on an internet connection; instead, devices are linked through wired or cellular networks. Noteworthy examples of IoT data protocols are: 1. Extensible Messaging and Presence Protocol (XMPP) XMPP is a versatile data transfer protocol used in instant messaging technologies like Messenger and Google Hangouts. It is widely used for machine-to-machine communication in IoT, providing reliable and secure communication between devices. XMPP can transfer unstructured and structured data, making it a safe and flexible communication solution. 2. MQTT (Message Queuing Telemetry Transport) MQTT is a lightweight publish-subscribe protocol that enables seamless data flow between devices (a minimal publisher sketch appears at the end of this article). Despite its widespread adoption, it has limitations, such as the need to define your own data representation and device management structure and the absence of built-in security measures. Careful consideration is essential when selecting this protocol for your IoT project. 3. CoAP (Constrained Application Protocol) CoAP is designed for constrained devices and translates easily to HTTP-based IoT systems. It offers low overhead, ease of use, and multicast support, making it ideal for devices with resource constraints, such as IoT microcontrollers or WSN nodes. Its applications include smart energy and building automation. 4. AMQP (Advanced Message Queuing Protocol) The Advanced Message Queuing Protocol (AMQP) sends transactional messages between servers. It provides high security and reliability, making it common in server-based analytical environments, especially in banking. However, its heaviness limits its use in IoT devices with limited memory. 5. DDS (Data Distribution Service) DDS (Data Distribution Service) is a scalable IoT protocol that enables high-quality communication in IoT. Similar to MQTT, DDS works on a publisher-subscriber model. It can be deployed in various settings, making it perfect for real-time and embedded systems. DDS allows for interoperable data exchange independent of hardware and software, positioning it as an open international middleware IoT standard. 6.
HTTP (Hyper Text Transfer Protocol) HTTP is generally not the preferred IoT standard due to its cost in terms of battery life, power consumption, and protocol overhead. However, it is still used in the manufacturing and 3-D printing industries because it can handle large amounts of data and lets a PC connect to a 3-D printer to print three-dimensional objects. 7. WebSocket WebSocket, developed as part of HTML5 in 2011, enables message exchange between clients and servers through a single TCP connection. Like CoAP, it simplifies managing connections and bidirectional communication on the Internet. It is widely used in IoT networks for continuous data communication across devices in client or server environments. IoT Network Protocols Now that we've covered IoT data protocols, let's explore the different IoT network protocols. IoT network protocols facilitate the connection of devices over a network, usually the Internet. Noteworthy examples of IoT network protocols are: 1. Lightweight M2M (LWM2M) IoT devices and sensors require minimal power, necessitating lightweight and energy-efficient communication. Gathering meteorological data, for example, often demands numerous sensors. To minimize energy consumption, experts employ lightweight communication protocols. One such protocol is Lightweight M2M (LWM2M), which enables efficient remote device management and connectivity. 2. Cellular Cellular networks like 4G and 5G are used to connect IoT devices, offering low latency and high data transfer speeds. However, they require a SIM card, which can be costly for many devices spread across a wide area. 3. Wi-Fi Wi-Fi is a widely known IoT protocol that provides internet connectivity within a specific range. It uses radio waves on particular frequencies, such as the 2.4 GHz or 5 GHz channels. These frequencies offer multiple channels for various devices, preventing network congestion. Typically, Wi-Fi connections range from 10 to 100 meters, with their range and speed influenced by the environment and coverage type. 4. Bluetooth The Bluetooth 4.0 standard uses 40 channels with 2 MHz spacing, enabling a maximum data rate of 1 Mbps. Bluetooth Low Energy (BLE) technology is ideal for IoT applications prioritizing flexibility, scalability, and low power consumption. 5. ZigBee ZigBee-based networks, like Bluetooth, boast a significant IoT user base. ZigBee offers lower power consumption, longer range (up to 200 meters compared to Bluetooth's 100 meters), a low data rate, and high security. Its simplicity and ability to scale to thousands of nodes make it an ideal choice for small devices. Many suppliers offer devices that support ZigBee's open standard, self-assembly, and self-healing mesh topology model. 6. Thread The Thread protocol runs over the same IEEE 802.15.4 radio as Zigbee. It provides efficient internet access to low-powered devices within a small area and offers the stability of Zigbee and Wi-Fi with superior power efficiency. In a Thread network, self-healing capabilities enable specific devices to seamlessly take over the role of a failing router. 7. Z-Wave Z-Wave is a popular IoT protocol for home applications. It operates in the 800–900 MHz radio frequency band and rarely suffers from interference. However, the exact frequency is location-dependent, so choose the right device for your country. It is best used for home applications rather than in business. 8. LoRaWAN (Long Range WAN) LoRaWAN is an IoT protocol that enables low-power devices to talk with internet-connected services over a long-range wireless network.
It can be mapped to the 2nd and 3rd layers of the OSI (Open Systems Interconnection) model. Conclusion Each IoT communication protocol is distinct, with a specific set of parameters that can lead to success in one application yet render it completely ineffective in another. Choosing IoT protocols and standards for software development projects is therefore a significant decision. Software developers must understand the gravity of this decision and determine the proper protocol for their IoT application. As the IoT industry continues to evolve, it brings about revolutionary changes in device communication, further underscoring the importance of IoT protocols. In this dynamic landscape, organizations are continually challenged to select the most suitable IoT protocol for their projects.
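To ground the data-protocol discussion, here is the minimal MQTT publisher sketch referenced above. It is a sketch only: it assumes the paho-mqtt client library (1.x API), a broker at the placeholder hostname broker.example.com, and illustrative topic, credentials, and payload.

Python
import json

import paho.mqtt.client as mqtt  # assumes the paho-mqtt package (1.x API)

client = mqtt.Client(client_id="sensor-001")           # illustrative client ID
client.username_pw_set("device_user", "device_pass")   # placeholder credentials
client.connect("broker.example.com", 1883)             # placeholder broker, default port
client.loop_start()                                     # background network loop

# Publish a small JSON payload; QoS 1 asks the broker to acknowledge delivery.
payload = json.dumps({"temperature": 21.5, "humidity": 48})
client.publish("home/livingroom/climate", payload, qos=1)

client.loop_stop()
client.disconnect()

Swapping the protocol (for example, to CoAP or AMQP) changes the library and the delivery guarantees, but the trade-offs described above (speed, power, and security) stay the same.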
With video surveillance increasingly becoming a top application of smart technology, video streaming protocols are getting a lot more attention. We’ve recently spent a lot of time on our blog posts discussing real-time communication, both to and from video devices, and that has finally led to an examination of the Real-Time Streaming Protocol (RTSP) and its place in the Internet of Things (IoT). What Is the Real-Time Streaming Protocol? The Real-Time Streaming Protocol is a network control convention that’s designed for use in entertainment and communications systems to establish and control media streaming sessions. RTSP is how you will play, record, and pause media in real time. Basically, it acts like the digital form of the remote control you use on your TV at home. We can trace the origins of RTSP back to 1996 when a collaborative effort between RealNetworks, Netscape, and Columbia University developed it with the intent to create a standardized protocol for controlling streaming media over the Internet. These groups designed the protocol to be compatible with existing network protocols, such as HTTP, but with a focus specifically on the control aspects of streaming media, which HTTP did not adequately address at the time. The Internet Engineering Task Force (IETF) officially published RTSP in April of 1998. Since the inception of RTSP, IoT developers have used it for various applications, including for streaming media over the Internet, in IP surveillance cameras, and in any other systems that require real-time delivery of streaming content. It’s important to note that RTSP does not actually transport the streaming data itself; rather, it controls the connection and the streaming, often working in conjunction with other protocols like the Real-time Transport Protocol (RTP) for the transport of the actual media data. RTSP works on a client-server architecture, in which a software or media player – called the client – sends requests to a second party, i.e., the server. In an IoT interaction, the way this works is typically that the client software is on your smartphone or your computer and you are sending commands to a smart video camera or other smart device that acts as the server. The server will respond to requests by performing a specific action, like playing or pausing a media stream or starting a recording. And you’ll be able to choose what the device does in real-time. Understanding RTSP Requests So, the client in an RTSP connection sends requests. But what exactly does that mean? Basically, the setup process for streaming via RTSP involves a media player or feed monitoring platform on your computer or smartphone sending a request to the camera’s URL to establish a connection. This is done using the “SETUP” command for setting up the streaming session and the “PLAY” command to start the stream. The camera then responds by providing session details so the RTP protocol can send the media data, including details about which transport protocol it will use. Once the camera receives the “PLAY” command through RTSP, it begins to stream packets of video data in real-time via RTP, possibly through a TCP tunnel (more on this later). The media player or monitoring software then receives and decodes these video data packets into viewable video. Here’s a more thorough list of additional requests and their meanings in RTSP: OPTIONS: Queries the server for the supported commands. 
It’s used to request the available options or capabilities of a server.
DESCRIBE: Requests a description of a media resource, typically in SDP (Session Description Protocol) format, which includes details about the media content, codecs, and transport information.
SETUP: Initializes the session and establishes a media transport, specifying how the media streams should be sent. This command also prepares the server for streaming by allocating necessary resources.
PLAY: Starts the streaming of the media. It tells the server to start sending data over the transport protocol defined in the SETUP command.
PAUSE: Temporarily halts the stream without tearing down the session, allowing it to be resumed later with another PLAY command.
TEARDOWN: Ends the session and stops the media stream, freeing up the server resources. This command effectively closes the connection.
GET_PARAMETER: Used to query the current state or value of a parameter on the session or media stream.
SET_PARAMETER: Allows the client to change or set the value of a parameter on the session or media stream.
Once a request goes through, the server can offer a response. For example, a “200 OK” response indicates a successful completion of the request, while “401 Unauthorized” indicates that the server needs more authentication. And “404 Not Found” means the specified resource does not exist. If that looks familiar, it’s because you’ve probably seen 404 errors and a message like “Web page not found” at least once in the course of navigating the internet. The Real-Time Transport Protocol As I said earlier, RTSP doesn’t directly transmit the video stream. Instead, developers use the protocol in conjunction with a transport protocol. The most common is the Real-time Transport Protocol (RTP). RTP delivers audio and video over networks from the server to the client so you can, for example, view the feed from a surveillance camera on your phone. The protocol is widely used in streaming media systems and video conferencing to transmit real-time data, such as audio, video, or simulation data. Some of the key characteristics of RTP include:
Payload type identification: RTP headers include a payload type field, which allows receivers to interpret the format of the data, such as the codec being used.
Sequence numbering: Each RTP data packet is assigned a sequence number. This helps the receiver detect data loss and reorder packets that arrive out of sequence.
Timestamping: RTP packets carry timestamp information to enable the receiver to reconstruct the timing of the media stream, maintaining the correct pacing of audio and video playback.
RTP and RTSP are still not enough on their own to handle all the various tasks involved in streaming video data. Typically, a streaming session will also involve the Real-time Transport Control Protocol (RTCP), which provides feedback on the quality of the data distribution, including statistics and information about participants in the streaming session. Finally, RTP itself does not provide any mechanism for ensuring timely delivery or protecting against data loss; instead, it relies on underlying network protocols such as the User Datagram Protocol (UDP) or Transport Control Protocol (TCP) to handle data transmission. To put it all together, RTP puts data in packets and transports it via UDP or TCP, while RTCP helps with quality control and RTSP only comes in to set up the stream and act like a remote control.
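As a small client-side illustration of the flow just described, the sketch below opens an RTSP stream with OpenCV, whose FFmpeg backend performs the DESCRIBE/SETUP/PLAY exchange and receives the RTP packets for us. It assumes the opencv-python package; the camera URL, credentials, and stream path are placeholders that vary by camera model.

Python
import cv2  # assumes the opencv-python package

# Placeholder URL: user, password, address, and path depend on the camera.
RTSP_URL = "rtsp://user:password@192.168.1.10:554/stream1"

cap = cv2.VideoCapture(RTSP_URL)  # the backend handles RTSP signaling and RTP transport
if not cap.isOpened():
    raise RuntimeError("Could not open the RTSP stream")

while True:
    ok, frame = cap.read()  # each frame is decoded from the incoming RTP video data
    if not ok:
        break
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop viewing
        break

cap.release()
cv2.destroyAllWindows()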
RTSP via TCP Tunneling While I said you can use both UDP and TCP to deliver a media stream, I usually recommend RTSP over TCP, specifically using TCP tunneling. TCP tunneling makes it easier for RTSP commands to get through network firewalls and Network Address Translation (NAT) systems. This is necessary because RTSP in its out-of-the-box form has certain deficiencies when it comes to authentication and privacy. Its features were not built for today’s internet, where firewalls sit at every boundary. Rather than being made for devices on local home networks behind NAT systems, RTSP was originally designed for streaming data from central services. For that reason, it struggles to get through firewalls or to locate and access cameras behind those firewalls, which limits its possible applications. However, using TCP tunneling allows RTSP to get through firewalls and enables easy NAT traversal while maintaining strong authentication. It allows you to use an existing protocol and simply “package” it in TCP for enhanced functionality. The tunnel wraps RTSP communication inside a NAT traversal layer to get through the firewall. This is important because it can be difficult to set up a media stream between devices that are on different networks: for example, if you’re trying to monitor your home surveillance system while you’re on vacation. Another benefit of TCP tunneling is enhanced security. Whereas RTSP and RTP don’t have the out-of-the-box security features of some other protocols, like WebRTC, you can fully encrypt all data that goes through the TCP tunnel. These factors have made RTSP via TCP tunneling a top option for video streaming within IoT. Final Thoughts In summary, while RTSP provides a standardized way to control media streaming sessions, its inherent limitations make it challenging for modern IoT video use cases requiring remote access and robust security. However, by leveraging TCP tunneling techniques, developers can harness the benefits of RTSP while overcoming firewall traversal and encryption hurdles. As video streaming continues to drive IoT innovation, solutions like RTSP over TCP tunneling will be crucial for enabling secure, real-time connectivity across distributed devices and networks. With the right protocols and services in place, IoT developers can seamlessly integrate live video capabilities into their products.
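As a rough sketch of the TCP-based delivery discussed above, the snippet below drives the ffmpeg command-line tool from Python with its -rtsp_transport tcp option, which carries the RTP packets interleaved over the RTSP TCP connection and records a short clip. The camera URL and output file are placeholders, and note that TCP interleaving by itself is not an encrypted tunnel; encryption would typically be layered on top, for example via a VPN or a TLS-terminating relay.

Python
import subprocess

RTSP_URL = "rtsp://user:password@192.168.1.10:554/stream1"  # placeholder camera URL

# -rtsp_transport tcp tells ffmpeg to carry RTP interleaved over the RTSP TCP
# connection, which traverses firewalls and NAT more reliably than UDP delivery.
subprocess.run(
    [
        "ffmpeg",
        "-rtsp_transport", "tcp",
        "-i", RTSP_URL,
        "-t", "30",        # record 30 seconds
        "-c", "copy",      # remux without re-encoding
        "clip.mp4",        # placeholder output file
    ],
    check=True,
)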
Node-RED is an open-source, flow-based development tool designed for programming Internet of Things (IoT) applications with ease, and is a part of the OpenJS Foundation. It provides a browser-based editor where users can wire together devices, APIs, and online services by dragging and dropping nodes into a flow. This visual approach to programming makes it accessible for users of all skill levels to create complex applications by connecting different elements without writing extensive code. Node-RED has been working on some great improvements lately, including the first beta release of Node-RED 4.0. Updates include auto-complete in flow/global/env inputs, timestamp formatting options, and better, faster, more compliant CSV node. More to come in the full release next month! Recently, the OpenJS Foundation talked with Kazuhito Yokoi (横井 一仁), Learning and Development Division, Hitachi Academy, to find out more about Node-RED and why it is becoming so popular in Industrial IoT applications. A browser-based low-code programming tool sounds great, but how often do users end up having to write code anyway? It depends on user skills and systems. If users such as factory engineers have no IT skills, they can create flow without coding. The two most common cases are data visualization and sending data to a cloud environment. In these cases, users can create their systems by connecting Node-RED nodes. If users have IT skills, they can more easily customize Node-RED flow. They need to know about SQL when they want to store sensor data. If they want external npm modules, they should understand how to call the function through JavaScript coding, but in both cases, the programming code of a Node-RED node is usually on a computer screen. Hitachi is using Generative AI based on a Hitachi LLM to support the use of low-code development. Do you personally use ChatGPT with Node-RED? Do you think it will increase efficiency in creating low-code Node-RED flows? Yes, I do use ChatGPT with Node-RED. Recently, I used ChatGPT to generate code to calculate location data. Calculating direction and distance from two points, including latitude and longitude, is difficult because it requires trigonometric functions. But ChatGPT can automatically generate the source code from the prompt text. In particular, the function-gpt node, developed by FlowFuse, can generate JavaScript code in the Node-RED-specific format within a few seconds. Users just type the prompt text on the Node-RED screen. It’s clear to me that using ChatGPT with Node-RED allows IT engineers to reduce their coding time, and it expands the capabilities of factory engineers because they can try to write code themselves. In addition to factory applications, there's a compelling use case in Japan that underscores the versatility of Node-RED, especially for individuals without an IT skill set. In Tokyo, the Tokyo Mystery Circus, an amusement building, utilizes Node-RED to control its displays and manage complex interactions. The developer behind this project lacked a traditional IT background but needed a way to handle sophisticated tasks, such as controlling various displays that display writing as part of the gameplay. By using Node-RED, along with ChatGPT for creating complex handling scripts, the developer was able to achieve this. Using these technologies in such a unique environment illustrates how accessible and powerful tools like Node-RED and ChatGPT can be for non-traditional programmers. 
This example, highlighted in Tokyo and extending to cities like Osaka and Nagoya, showcases the practical application of these technologies in a wide range of settings beyond traditional IT and engineering domains. For more details, the video below (in Japanese) provides insight into how Tokyo Mystery Circus uses Node-RED in its operations. Why is Node-RED popular for building Industrial IoT applications? Node-RED was developed in early 2013 as a side-project by Nick O'Leary and Dave Conway-Jones of IBM's Emerging Technology Services group and is particularly well-known for its support of IoT protocols like MQTT and HTTP. Because Node-RED has extensive MQTT functionality, it is ready for use in Industrial IoT. Beyond MQTT, other protocols like OPC UA (a cross-platform, open-source, IEC 62541 standard for data exchange from sensors to cloud applications) and Modbus (a client/server data communications protocol in the application layer) are available through third-party nodes developed by the community. Because Node-RED can connect many types of devices, it is very popular in the Industrial IoT field. In addition, many industrial devices support Node-RED. Users can buy these devices and start using Node-RED quickly. Why have companies like Microsoft, Hitachi, Siemens, AWS, and others adopted Node-RED? Regarding Hitachi, Node-RED has emerged as a crucial communication tool bridging the gap between IT and factory engineers, effectively addressing the barriers that exist both in technology and interpersonal interactions. Within one company, IT and OT (Operational Technology) departments often operate like two distinct entities, which makes it challenging to communicate despite the critical importance of collaboration. To overcome this, Hitachi decided to adopt Node-RED as a primary communication tool in programming. Node-RED’s intuitive interface allows the entire flow to be visible on the screen, facilitating discussions and collaborative efforts seamlessly. This approach was put into practice recently when I, as the only IT engineer, visited a Hitachi factory. Initially, while I typed software code on my own, the factory engineers couldn't grasp the intricacies of the work. However, after I developed a Node-RED flow, it became a focal point of interest, enabling other engineers to gather around and engage with the project actively. This shift towards a more inclusive and comprehensible method of collaboration underscores the value of Node-RED in demystifying IT for non-specialists. I believe Siemens operates under a similar paradigm, utilizing Node-RED to enhance communication between its IT and engineering departments. Moreover, major companies like Microsoft and AWS are also recognizing the potential of Node-RED. By integrating it within their IT environments, they aim to promote their cloud services more effectively. This wide adoption of Node-RED across different sectors, from industrial giants to cloud service providers, highlights its versatility and effectiveness as a tool for fostering understanding and cooperation across diverse technological landscapes. How important is Node-RED in the MING (MQTT, InfluxDB, Node-RED, Grafana) stack? Node-RED is an essential tool in the MING stack because it is a central component that facilitates the connection to the other software. The MING stack is designed to facilitate data collection, storage, processing, and visualization, and it brings together the key open-source components of an IoT system.
Its importance cannot be overstated as it connects various software components and represents the easiest way to store and manage data. This functionality underscores its crucial role in the integration and efficiency of the stack, highlighting its indispensability in achieving streamlined data processing and application development. Node-RED has introduced advanced features like Git Integration, Flow Debugger, and Flow Linter. What's next for improving the developer experience with Node-RED? The main focus of Node-RED development at the moment is to improve the collaboration tooling, working towards concurrent editing to make it easier for multiple users to work together. Another next step for the community is building a flow testing tool. Flow testing is needed to ensure stability. There's a request from the community for flow testing capabilities for Node-RED flows. In response, the Node-RED team, with significant contributions from Nick O'Leary (CTO and Founder, FlowFuse, and Node-RED Project Lead), is developing a flow testing tool, primarily as a plugin. A design document for this first implementation, called node-red-flow-tester, is available, allowing users to post issues and contribute feedback, which has been very useful. The tool aims to leverage REST API test frameworks for testing, although it's noted that some components cannot be tested in detail. Once available, this tool would simplify the process of upgrading Node-RED and its JavaScript version, ensuring compatibility with dependency modules. Simultaneously, my focus has been on documentation and organizing hands-on events related to advanced features such as Git integration. These features are vital, as, without them, users might face challenges in their development projects. On Medium, under the username kazuhitoyokoi, I have published 6 articles that delve into these advanced features. One article specifically focuses on Git integration and is also available in Japanese, indicating the effort to cater to a broader audience. Furthermore, I have been active on Qiita, a popular Japanese technical knowledge-sharing platform, where I organized the first hands-on event. A full video of the first event is available here (in Japanese). The second event was held on March 18, 2024, and a third event is scheduled for April 26, 2024, showcasing the community's growing interest in these topics and the practical application of Node-RED in development projects. This multifaceted approach, combining tool development, documentation, and community engagement, aims to enhance the Node-RED ecosystem, making it more accessible and user-friendly for developers around the world. Contributions to the Node-RED community include source code, internationalization of the flow editor, bug reports, feature suggestions, participating in developer meetings, and more. What is the best way to get started contributing to Node-RED? If you are not a native English speaker, I recommend translating the Node-RED flow editor as a great way to start contributing. Currently, users can contribute to the Node-RED project by creating a JSON file that contains local language messages. If you find a bug, try inspecting the code. The Node-RED source code is very easy to understand. After trying a fix, you can make a pull request. Conclusion The interview shows that Node-RED is an essential tool to improve collaboration between different professionals without technical barriers in the development of Industrial IoT applications.
Discover the potential of Node-RED for your projects and contribute to the Node-RED project. The future of Node-RED is in our hands! Resources:
Node-RED main site
Get an invite to the Node-RED Slack
Recently, I mentioned how I refactored the script that keeps my GitHub profile up-to-date. Since GeeCon Prague, I'm also a happy owner of a Raspberry Pi. Though the current setup works flawlessly (and is free), I wanted to experiment with self-hosted runners. Here are my findings. Context GitHub offers a generous free tier for GitHub Actions: GitHub Actions usage is free for standard GitHub-hosted runners in public repositories, and for self-hosted runners. For private repositories, each GitHub account receives a certain amount of free minutes and storage for use with GitHub-hosted runners, depending on the account's plan. Any usage beyond the included amounts is controlled by spending limits. — About billing for GitHub Actions Yet, the policy can easily change tomorrow. Free tier policies show a regular trend of shrinking down when:
A large enough share of users use the product (lock-in)
Shareholders want more revenue
A new finance manager decides to cut costs
The global economy shrinks down
A combination of the above
Forewarned is forearmed. I like to try options before I need to choose one. Case in point: what if I need to migrate? The Theory GitHub Actions comprise two components:
The GitHub Actions infrastructure itself, which hosts the scheduler of jobs
Runners, which run the jobs
By default, jobs run on GitHub's runners. However, it's possible to configure a job to run on other runners, whether on-premises or in the cloud: these are called self-hosted runners. The documentation regarding how to create self-hosted runners gives all the necessary information to build one, so I won't paraphrase it. I noticed two non-trivial issues, though. First, if you have jobs in different repositories, you need to set up a runner for each repository. Runner groups are only available for organization repositories. Since most of my repos depend on my regular account, I can't use groups. Hence, you must duplicate each repository's runner package on the Pi. In addition, there's no dedicated package: you must untar an archive. This means there's no way to upgrade the runner version easily. That being said, I expected the migration to be one line long:

YAML
jobs:
  update:
    #runs-on: ubuntu-latest
    runs-on: self-hosted

It's a bit more involved, though. Let's detail what steps I had to undertake in my repo to make the job work. The Practice GitHub Actions depend on Docker being installed on the runner. Because of this, I thought jobs ran in a dedicated image: that's plain wrong. Whatever you script in your job happens on the running system. Case in point, the initial script installed Python and Poetry:

YAML
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - name: Set up Python 3.x
        uses: actions/setup-python@v5
        with:
          python-version: 3.12
      - name: Set up Poetry
        uses: abatilo/actions-poetry@v2
        with:
          poetry-version: 1.7.1

In the context of a temporary container created during each run, it makes sense; in the context of a stable, long-running system, it doesn't. Raspbian, the Raspberry Pi's default operating system, already has Python 3.11 installed. Hence, I had to downgrade the version configured in Poetry. It's no big deal because I don't use any specific Python 3.12 feature.

TOML
[tool.poetry.dependencies]
python = "^3.11"

Raspbian forbids the installation of any Python dependency in the primary environment, which is a very sane default. To install Poetry, I used the regular APT package manager:

Shell
sudo apt-get install python-poetry

The next step was to handle secrets.
On GitHub, you set the secrets in the GUI and reference them in your scripts via environment variables:

YAML
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - name: Update README
        run: poetry run python src/main.py --live
        env:
          BLOG_REPO_TOKEN: ${{ secrets.BLOG_REPO_TOKEN }}
          YOUTUBE_API_KEY: ${{ secrets.YOUTUBE_API_KEY }}

This allows segregating individual steps so that a step has access to only the environment variables it needs. For self-hosted runners, you set environment variables in the existing .env file inside the runner folder, and the step no longer declares them:

YAML
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - name: Update README
        run: poetry run python src/main.py --live

If you want more secure setups, you're on your own. Finally, the architecture is a pull-based model. The runner constantly checks if a job is scheduled. To make the runner a service, we need to use the out-of-the-box scripts inside the runner folder:

Shell
sudo ./svc.sh install
sudo ./svc.sh start

The script uses systemd underneath. Conclusion Migrating from a GitHub runner to a self-hosted runner is not a big deal but requires changing some bits and pieces. Most importantly, you need to understand that the script runs on the machine. This means you need to automate the provisioning of a new machine in case of crashes. I'm considering the benefits of running the runner inside a container on the Pi to roll back to my previous steps. I'd be happy to hear if you found and used such a solution. In any case, I'm not migrating any more jobs to self-hosted for now. To Go Further
About billing for GitHub Actions
About self-hosted runners
Configuring the self-hosted runner application as a service
Tim Spann, Principal Developer Advocate, Zilliz
Alejandro Duarte, Developer Advocate, MariaDB plc
Kai Wähner, Technology Evangelist, Confluent