DZone

IoT

IoT, or the Internet of Things, is a technological field that makes it possible for users to connect devices and systems and exchange data over the internet. Through DZone's IoT resources, you'll learn about smart devices, sensors, networks, edge computing, and many other technologies — including those that are now part of the average person's daily life.

Latest Refcards and Trend Reports
Trend Report
Edge Computing and IoT
Refcard #214
MQTT Essentials
Refcard #263
Messaging and Data Infrastructure for IoT

DZone's Featured IoT Resources

Refcard #367
Data Management for Industrial IoT

Refcard #386
Mobile Database Essentials

Get Started On Celo With Infura RPC Endpoints
By Paul McAviney
ANEW Can Help With Road Safety
By Rajesh Gaddipati
A Cloud-Native SCADA System for Industrial IoT Built With Apache Kafka
By Kai Wähner
Can Artificial Intelligence Provide Value in IoT Applications?

If you are involved in the field of IoT technology, it is essential to understand the importance and benefits of AI. In this article, I will discuss the key aspects of AI so that you can get a clear picture of the topic. Today, IoT applications include visual recognition, predicting future events, and identifying objects, and they are used for many purposes, such as home automation, healthcare, manufacturing, and smart cities.

AI Algorithms Allow the System to Evaluate, Learn, and Act Independently

AI algorithms allow a system to evaluate, learn, and act independently. The technology is designed to learn from experience and to pick up new skills on its own. This means that if you want your device or system to learn a certain skill, you need to feed it data, either yourself or through someone else (e.g., an employee).

Machine Learning Is Another Branch of AI

Machine learning is a branch of AI that allows a program to analyze huge data sets and make decisions on its own when required. It can be used for a variety of purposes, such as image classification, speech recognition, or recommendation engines. Machine learning uses data to learn patterns in order to automate processes that would otherwise require human intervention. For example, an autonomous vehicle (AV) might use it to recognize traffic signs and road conditions at night, so that it knows how fast to drive on a particular road based on its surroundings rather than relying solely on instructions provided by its designers or by people familiar with those roads.

Deep Learning Is the Best-Known Example of Machine Learning

Deep learning is a type of machine learning that uses artificial neural networks (ANNs) to perform pattern recognition and classification tasks. It relies on many layers of ANNs, where each layer has multiple neurons and learns from past data. The human brain is a loose analogy for such a system: it can perceive and process information in many different ways, which allows us to understand language, recognize faces, read books, and make decisions based on experience.

AI Requires a Significant Amount of Data

AI requires a significant amount of data, and manufacturers can use the data collected by IoT devices. The more data available to train an AI model, the better it will perform. For example, if you have an IoT device that monitors the temperature in your home and sends alerts when it detects changes outside normal parameters (such as a drop of two degrees), you may be able to train a predictive model on this information, together with factors such as weather patterns or historical trends, so the device can predict whether another cold snap is coming. This kind of analysis can reduce the maintenance costs of equipment such as heating systems and air conditioners: without regular monitoring, such systems run less efficiently over time due to the wear and tear of repeated heating and cooling cycles, especially during the winter months.

IoT and AI Can Be Used to Give Instructions to Machines at Home or Work Without Speaking or Typing

As the examples above show, AI and IoT are not just two technologies working side by side. They complement each other, making it possible for people to give instructions to machines at home or at work without speaking or typing anything.

They also have other benefits. Using AI in IoT applications lets us create systems that learn from their environment and adapt accordingly, making them more efficient than traditional approaches built on predefined rules (e.g., "if these conditions are met, then do this"). For example, an autonomous vehicle might identify traffic patterns better than a human driver because it has access to many kinds of data about road conditions, including weather forecasts.

Conclusion

I have discussed the essential aspects of using AI in IoT applications. AI is a branch of computer science that deals with the design and development of intelligent agents: software that senses its environment and takes actions that maximize its chance of achieving some goal. Its ideas have been applied to engineering, philosophy, law, biology, and economics for over 50 years. The field took shape in 1956, when John McCarthy coined the term "artificial intelligence" at the Dartmouth workshop, and early systems such as Arthur Samuel's checkers program, which improved by playing games against itself, showed that machines could learn from experience even on hardware far slower than today's. Ultimately, AI is one of the most promising technologies and will play an important role in making IoT work smarter. It can help us solve problems related to data collection, analysis, and decision-making.
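The temperature-alert scenario described above can be made concrete. The following is a minimal sketch of my own (not from the article): a rolling-mean "model" flags readings that deviate sharply from recent history, the kind of simple prediction an IoT monitor might run before sending an alert. The class name and threshold are illustrative assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: flag temperature readings that deviate from the
// rolling mean of the last few samples by more than a threshold.
public class TemperatureMonitor {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int size;
    private final double threshold;

    public TemperatureMonitor(int size, double threshold) {
        this.size = size;
        this.threshold = threshold;
    }

    // Returns true if the reading is anomalous relative to the rolling mean.
    public boolean isAnomalous(double reading) {
        boolean anomalous = window.size() == size
                && Math.abs(reading - mean()) > threshold;
        window.addLast(reading);
        if (window.size() > size) {
            window.removeFirst();
        }
        return anomalous;
    }

    private double mean() {
        double sum = 0;
        for (double v : window) sum += v;
        return sum / window.size();
    }

    public static void main(String[] args) {
        TemperatureMonitor monitor = new TemperatureMonitor(3, 2.0);
        // The last reading drops sharply and should be flagged.
        double[] readings = {20.0, 20.1, 19.9, 20.2, 17.5};
        for (double r : readings) {
            System.out.println(r + " anomalous? " + monitor.isAnomalous(r));
        }
    }
}
```

A real deployment would replace the rolling mean with a trained model that also considers weather forecasts and historical trends, but the control flow (predict, compare, alert) stays the same.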

By Riley Adams
Top 5 Internet of Things (IoT) Trends to Expect in 2023

Computers and smartphones were the first devices connected to the internet. Over the previous ten years, the human lifestyle has evolved with the introduction of smart TVs, electric kettles, and smart fridges, along with smart alarms, cameras, and light bulbs. In the industrial space, employees have become accustomed to working in smart machinery environments, such as alongside robots. According to McKinsey and Company, more than 43 billion devices will be linked to the internet in 2023, generating, collecting, and helping people utilize data in various ways.

Key Market Insights

As per Markets and Markets Research, the global IoT market size will reach $650.5 billion by 2026, showing a CAGR of 16.7% from 2021 to 2026. According to the report, the essential market drivers are below.

Access to Low-Cost, Low-Power Sensor Technology

When it comes to IoT devices, sensory instruments play an essential role. Sensor technologies can generate data about any physical event, such as orientation, motion, light, humidity, and temperature. They can even monitor biometric signals, e.g., blood pressure and heart rate. Innovation in sensor technologies will expand IoT capabilities even further. In the past decade, the cost of sensor technologies was high, resulting in limited adoption in the industrial sector. Over time, however, declining prices have increased adoption rates across modern organizations. For instance, the cost of low-frequency passive Radio Frequency Identification (RFID) tags and sensors has fallen over the past decade, and the average cost of sensor technologies has decreased from $1.30 per unit to $0.38 per unit. Growth in the IoT market has ensured the widespread deployment of low-cost devices, contributing to technological advancement.

Top 5 Internet of Things Trends to Keep a Tab On

The Internet of Things (IoT) is a system of connected devices, digital machines, and users with unique identifiers and the ability to transfer data over a network without human-to-human or human-to-computer interaction. The following five trends will transform the world in 2023.

1. Building Realistic Digital Twins and the Enterprise Metaverse

This is a merger of two significant tech trends that will shape how innovative technology is applied across industries during 2023. For modern businesses, the metaverse will play an important role in bridging the gap between the virtual and real worlds. With IoT sensors, creating realistic digital twins becomes easier, and corporate professionals can use Virtual Reality (VR) headsets to step inside those twins and understand how they function in order to influence business outcomes.

2. Discouraging Fraud Through Enhanced IoT Security

IoT devices improve users' lives, but loopholes in the network attract cybercriminals. In other words, more connected devices mean more opportunities for fraudsters to accomplish their malicious goals. With the number of devices increasing during 2023, manufacturers and security experts will gear up to combat fraud attempts from bad actors and protect individuals' sensitive data. In the USA, the White House National Security Council has said it will establish standardized security labeling for consumer IoT device manufacturers in the first quarter of 2023, helping users quickly identify the risks associated with IoT systems. In addition, the United Kingdom (UK) will introduce its Product Security and Telecommunications Infrastructure (PSTI) bill to address security issues in IoT systems.

3. Utilizing the Internet of Healthcare Things

The healthcare sector presents a huge growth opportunity for IoT technology: the financial worth of IoT-based health devices is expected to reach around $267 billion by 2023. A massive game changer is the use of wearables and in-home sensors that let healthcare professionals monitor patients remotely. This not only provides 24/7 medical care but also frees up valuable resources for emergency care. In 2023, more patients will become familiar with virtual hospital wards, where sensors and telemedicine approaches help professionals treat patients. Identity verification services can also help the healthcare sector discourage bad actors from exploiting the system.

4. Gaining Insight Into Governance and Regulations in the IoT Space

During 2023, the European Union (EU) will introduce legislation requiring manufacturers and vendors of smart devices to follow stringent regulations on customer data collection and storage and on ensuring data privacy. In Asia, 2023 brings a three-year plan by the Chinese government to introduce policies for the mass adoption of IoT technology. In China and elsewhere in the world, IoT can drive massive growth in the corporate sector; first, however, experts must plan for problems with privacy and personal rights.

5. Using IoT and Cloud Computing

The combination of cloud computing and IoT can increase data storage, improve data processing, and ensure greater business scalability, while also reducing infrastructure costs and enhancing security. Together, they enable business experts to make real-time decisions and accomplish goals by automating recurring tasks. According to the Research and Markets report, the global market for cloud computing in industrial IoT will reach a financial worth of around $8.159 billion by 2026, showing a CAGR of 10.98%.

The Bottom Line

Several IoT services have entered the market over the past five years. With time, companies are realizing the potential of IoT systems to enhance security, automate mundane tasks, and streamline data processing. Over the next ten years, the IoT market will keep growing exponentially. Hence, the Internet of Things will be a potent force behind the transformation of human society.

By Emily Daniel
How to Use MQTT in Java

MQTT is an OASIS standard messaging protocol for the Internet of Things (IoT). It is designed as an extremely lightweight publish/subscribe messaging transport that is ideal for connecting remote devices with a small code footprint and minimal network bandwidth. MQTT today is used in a wide variety of industries, such as automotive, manufacturing, telecommunications, and oil and gas. This article introduces how to use MQTT in a Java project to connect, subscribe, unsubscribe, publish, and receive messages between a client and a broker.

Add Dependency

The development environment for this article is:

Build tool: Maven
IDE: IntelliJ IDEA
Java: JDK 1.8.0

We will use the Eclipse Paho Java Client, the most widely used MQTT client library in the Java ecosystem. Add the following dependency to the pom.xml file.

<dependencies>
    <dependency>
        <groupId>org.eclipse.paho</groupId>
        <artifactId>org.eclipse.paho.client.mqttv3</artifactId>
        <version>1.2.5</version>
    </dependency>
</dependencies>

Create an MQTT Connection

MQTT Broker

This article uses the public MQTT broker provided by EMQX Cloud. The server access information is as follows:

Broker: broker.emqx.io
TCP Port: 1883
SSL/TLS Port: 8883

Connect

Set the basic connection parameters of MQTT. Username and password are optional.

String broker = "tcp://broker.emqx.io:1883";
// TLS/SSL
// String broker = "ssl://broker.emqx.io:8883";
String username = "emqx";
String password = "public";
String clientid = "publish_client";

Then create an MQTT client and connect to the broker.
MqttClient client = new MqttClient(broker, clientid, new MemoryPersistence());
MqttConnectOptions options = new MqttConnectOptions();
options.setUserName(username);
options.setPassword(password.toCharArray());
client.connect(options);

Instructions:

MqttClient: provides a set of methods that block and return control to the application once the MQTT action has completed.
MqttClientPersistence: represents a persistent data store used to hold outbound and inbound messages while they are in flight, enabling delivery to the specified QoS.
MqttConnectOptions: holds the set of options that control how the client connects to a server. Common methods include:

setUserName: sets the user name to use for the connection.
setPassword: sets the password to use for the connection.
setCleanSession: sets whether the client and server should remember state across restarts and reconnects.
setKeepAliveInterval: sets the "keep alive" interval.
setConnectionTimeout: sets the connection timeout value.
setAutomaticReconnect: sets whether the client will automatically attempt to reconnect to the server if the connection is lost.

Connecting With TLS/SSL

If you want to use a self-signed certificate for TLS/SSL connections, add bcpkix-jdk15on to the pom.xml file.

<!-- https://mvnrepository.com/artifact/org.bouncycastle/bcpkix-jdk15on -->
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk15on</artifactId>
    <version>1.70</version>
</dependency>

Then create the SSLUtils.java file with the following code.
package io.emqx.mqtt;

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.openssl.PEMKeyPair;
import org.bouncycastle.openssl.PEMParser;
import org.bouncycastle.openssl.jcajce.JcaPEMKeyConverter;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FileReader;
import java.security.KeyPair;
import java.security.KeyStore;
import java.security.Security;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class SSLUtils {
    public static SSLSocketFactory getSocketFactory(final String caCrtFile,
            final String crtFile, final String keyFile, final String password)
            throws Exception {
        Security.addProvider(new BouncyCastleProvider());

        // load CA certificate
        X509Certificate caCert = null;
        FileInputStream fis = new FileInputStream(caCrtFile);
        BufferedInputStream bis = new BufferedInputStream(fis);
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        while (bis.available() > 0) {
            caCert = (X509Certificate) cf.generateCertificate(bis);
        }

        // load client certificate
        bis = new BufferedInputStream(new FileInputStream(crtFile));
        X509Certificate cert = null;
        while (bis.available() > 0) {
            cert = (X509Certificate) cf.generateCertificate(bis);
        }

        // load client private key
        PEMParser pemParser = new PEMParser(new FileReader(keyFile));
        Object object = pemParser.readObject();
        JcaPEMKeyConverter converter = new JcaPEMKeyConverter().setProvider("BC");
        KeyPair key = converter.getKeyPair((PEMKeyPair) object);
        pemParser.close();

        // CA certificate is used to authenticate the server
        KeyStore caKs = KeyStore.getInstance(KeyStore.getDefaultType());
        caKs.load(null, null);
        caKs.setCertificateEntry("ca-certificate", caCert);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance("X509");
        tmf.init(caKs);

        // client key and certificates are sent to the server so it can authenticate the client
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);
        ks.setCertificateEntry("certificate", cert);
        ks.setKeyEntry("private-key", key.getPrivate(), password.toCharArray(),
                new java.security.cert.Certificate[]{cert});
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(
                KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, password.toCharArray());

        // finally, create the SSL socket factory
        SSLContext context = SSLContext.getInstance("TLSv1.2");
        context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return context.getSocketFactory();
    }
}

Set the options as follows.

String broker = "ssl://broker.emqx.io:8883";
// Set socket factory
String caFilePath = "/cacert.pem";
String clientCrtFilePath = "/client.pem";
String clientKeyFilePath = "/client.key";
SSLSocketFactory socketFactory = SSLUtils.getSocketFactory(caFilePath,
        clientCrtFilePath, clientKeyFilePath, "");
options.setSocketFactory(socketFactory);

Publish MQTT Messages

Create a class PublishSample that will publish a Hello MQTT message to the topic mqtt/test.
package io.emqx.mqtt;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class PublishSample {
    public static void main(String[] args) {
        String broker = "tcp://broker.emqx.io:1883";
        String topic = "mqtt/test";
        String username = "emqx";
        String password = "public";
        String clientid = "publish_client";
        String content = "Hello MQTT";
        int qos = 0;

        try {
            MqttClient client = new MqttClient(broker, clientid, new MemoryPersistence());
            MqttConnectOptions options = new MqttConnectOptions();
            options.setUserName(username);
            options.setPassword(password.toCharArray());
            options.setConnectionTimeout(60);
            options.setKeepAliveInterval(60);
            // connect
            client.connect(options);
            // create message and set up QoS
            MqttMessage message = new MqttMessage(content.getBytes());
            message.setQos(qos);
            // publish message
            client.publish(topic, message);
            System.out.println("Message published");
            System.out.println("topic: " + topic);
            System.out.println("message content: " + content);
            // disconnect
            client.disconnect();
            // close client
            client.close();
        } catch (MqttException e) {
            throw new RuntimeException(e);
        }
    }
}

Subscribe

Create a class SubscribeSample that will subscribe to the topic mqtt/test.
package io.emqx.mqtt;

import org.eclipse.paho.client.mqttv3.*;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class SubscribeSample {
    public static void main(String[] args) {
        String broker = "tcp://broker.emqx.io:1883";
        String topic = "mqtt/test";
        String username = "emqx";
        String password = "public";
        String clientid = "subscribe_client";
        int qos = 0;

        try {
            MqttClient client = new MqttClient(broker, clientid, new MemoryPersistence());
            // connect options
            MqttConnectOptions options = new MqttConnectOptions();
            options.setUserName(username);
            options.setPassword(password.toCharArray());
            options.setConnectionTimeout(60);
            options.setKeepAliveInterval(60);
            // set up callback
            client.setCallback(new MqttCallback() {
                public void connectionLost(Throwable cause) {
                    System.out.println("connectionLost: " + cause.getMessage());
                }

                public void messageArrived(String topic, MqttMessage message) {
                    System.out.println("topic: " + topic);
                    System.out.println("Qos: " + message.getQos());
                    System.out.println("message content: " + new String(message.getPayload()));
                }

                public void deliveryComplete(IMqttDeliveryToken token) {
                    System.out.println("deliveryComplete---------" + token.isComplete());
                }
            });
            client.connect(options);
            client.subscribe(topic, qos);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

MqttCallback:

connectionLost(Throwable cause): called when the connection to the server is lost.
messageArrived(String topic, MqttMessage message): called when a message arrives from the server.
deliveryComplete(IMqttDeliveryToken token): called when delivery of a message has completed and all acknowledgments have been received.

Test

Next, run SubscribeSample to subscribe to the mqtt/test topic. Then run PublishSample to publish a message on the mqtt/test topic. We will see that the publisher successfully publishes the message and the subscriber receives it.
Summary

We have now used the Paho Java Client as an MQTT client to connect to a public MQTT broker and implement message publishing and subscription. The full code is available on GitHub.

By Zhiwei Yu
Adaptive Sampling in an IoT World

The Internet of Things (IoT) is now an omnipresent network of connected devices that communicate and exchange data over the internet. These devices can be anything from industrial machinery monitors, weather and air quality monitoring systems, and security cameras to smart thermostats, refrigerators, and wearable fitness trackers. As the number of IoT devices increases, so does the volume of data they generate. A typical application of this data is to improve the performance and efficiency of the systems being monitored and to gain insight into users' behavior and preferences. However, the sheer volume makes such data challenging to collect and analyze, and it can overwhelm both the communication channels and the limited power and processing available on edge devices. This is where adaptive sampling techniques come into play: they can reduce workload, make better use of limited resources, and improve the accuracy and reliability of the data.

Adaptive Sampling

Adaptive sampling techniques "adapt" their sampling or transmission frequency based on the specific needs of the device or on changes in the system of interest. Consider, for example, a device on a limited data plan, a low-power battery, or a compute-restricted platform.

Examples:

A temperature sensor may collect data more frequently when the temperature is changing rapidly and less frequently when it remains stable.
A security camera captures images at a faster frame rate or higher resolution when there is activity in the field of view.
An air particulate meter increases its sampling rate when it notices the air quality deteriorating.
A self-driving car constantly senses the environment but may send special edge cases back to a central server for edge-case discovery.

What and Where to Sample

The resource utilization improvements you expect guide what to sample and where.
There are two sites at which to implement sampling: at measurement or at transmission.

Sampling at measurement: The edge device only measures (or updates its measurement frequency) when an algorithm, running either on the edge device or on a server, deems fit. This reduces power and compute, and periodically improves network bandwidth utilization.
Sampling at transmission: The edge device measures continuously and processes the measurements with an algorithm running locally. If a sample is high entropy, the device uploads the data to the cloud/server. Power and compute at measurement are unaffected, but network bandwidth utilization is reduced.

Identifying Important and Useful Data

We have often heard the refrain "data, data, data." But is all data equal? Not really. Data is most useful when it brings information. This is true even for Big Data applications, which are admittedly data-hungry: machine learning and statistical systems all need high-quality data, not just large quantities. So how do we find high-quality data? Entropy!

Entropy

Entropy is a measure of the uncertainty in a system or, more intuitively, a measure of the "information" it carries. Consider a system with a constant value or a constant rate of change (say, temperature). In optimal working conditions there is no new information: you get the expected measurement every time you sample. This is low entropy. On the other hand, if the temperature changes "noisily" or "unexpectedly," the entropy in the system is high; there is new and interesting information. The more unexpected the change, the larger the entropy and the more important that measurement. In information theory, when the probability of occurrence p(x) is low, entropy is high, and vice versa. A measurement probability of 1 (something we fully expect to happen) yields zero entropy, and rightly so. This principle of "informational value" is central to adaptive sampling.
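The relationship between probability and information can be made concrete with Shannon's formulas. This small sketch (mine, not from the article) computes the surprisal -log2 p(x) of a single measurement and the entropy of a discrete distribution; the class name is an illustrative assumption.

```java
// Illustrative sketch: surprisal and Shannon entropy in bits.
public class EntropyDemo {
    // Surprisal (self-information) of an event with probability p, in bits.
    public static double surprisal(double p) {
        return Math.log(1.0 / p) / Math.log(2);
    }

    // Shannon entropy of a discrete distribution, in bits.
    public static double entropy(double[] probs) {
        double h = 0;
        for (double p : probs) {
            if (p > 0) h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    public static void main(String[] args) {
        // A certain event carries no information.
        System.out.println(surprisal(1.0));
        // A 50/50 event carries one bit.
        System.out.println(surprisal(0.5));
        // A fair coin is maximally uncertain: about 1 bit of entropy.
        System.out.println(entropy(new double[]{0.5, 0.5}));
    }
}
```

A high-surprisal measurement (a reading the model considered unlikely) is exactly the kind of sample an adaptive scheme should keep or transmit.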
Some State-of-the-Art Techniques

The basic logic flow in all adaptive techniques is to use model predictions to assess the information contained in new measurements (sampled data). These prediction algorithms analyze past data and identify patterns that help predict whether a high-entropy event is likely to occur, allowing the system to focus its data collection efforts. The magic lies in how well we can model our predictions.

Adaptive filtering methods: These apply filtering techniques to past measurements to estimate the measurements at the next time steps. They can be FIR (Finite Impulse Response) or IIR (Infinite Impulse Response) techniques, such as weighted moving averages (which can be made more expressive with probabilistic or exponential treatment) or sliding-window-based methods. They are relatively low in complexity but may have a non-trivial memory footprint for buffering past measurements. They need small amounts of data for configuration.

Kalman filter methods: Kalman filters are efficient and have small memory footprints. They can be relatively complex and hard to configure but work well when tuned correctly. They need small amounts of data for configuration.

Machine learning methods: Using past collected data, we can build machine learning models to predict the next state of the system under observation. These are more complex but also generalize well. Depending on the task and complexity, large amounts of data may be needed for training.

Major Benefits

Improved efficiency: By collecting and analyzing only a subset of the available data, IoT devices can reduce workload and resource requirements. This improves efficiency and performance and reduces data collection, analysis, and storage costs.

Better accuracy: By selecting the data sources most likely to provide valuable or informative data, adaptive sampling techniques can help improve the accuracy and reliability of the data.
This can be particularly useful when making decisions or taking actions based on the data.

Greater flexibility: Adaptive sampling techniques allow IoT devices to adapt to changes in the data sources or in the data itself. This is particularly useful for devices deployed in dynamic or changing environments, where the data may vary over time.

Reduced post-processing complexity: By collecting and analyzing data from a subset of the available sources, adaptive sampling techniques reduce the complexity of the data and make it easier to understand and analyze. This is particularly useful for devices with limited processing power or storage capacity, or for teams with limited data science/engineering resources.

Potential Limitations

Selection bias: By selecting a subset of the data, adaptive sampling may introduce selection bias. This can occur if models and systems are trained on a specific type of data that is not representative of the overall data population, leading to inaccurate or unreliable conclusions.

Sampling errors: Errors in the sampling process can affect the accuracy and reliability of the data. These may be due to incorrect sampling procedures, inadequate sample sizes, or non-optimal configurations.

Resource constraints: Adaptive sampling techniques may require additional processing power, storage capacity, or bandwidth, which may not be available on all IoT devices. This can limit their use on specific devices or in certain environments.

Runtime complexity: Adaptive sampling techniques may involve machine learning algorithms or other complex processes, which increase the complexity of data collection and analysis. This can be challenging for devices with limited processing power or storage capacity.
Workarounds

Staged deployment: Instead of deploying a sampling scheme on all devices, deploy it on small but representative test groups. The "sampled" data from these groups can then be analyzed against the more expansive datasets for biases and domain mismatches. This can be done in stages and iteratively, ensuring the system never becomes highly biased.

Ensemble of sampling techniques: Different devices can be armed with slightly different sampling techniques, varying from sample sizes and windows to different algorithms. This increases the complexity of post-processing, but it mitigates sampling errors and selection biases.

Resource constraints and runtime complexity are hard to mitigate; unfortunately, that is the cost of implementing better sampling techniques. Finally: test, test, and test again.

Takeaways

Adaptive sampling can be a useful tool for IoT if one can model the system being observed. We briefly introduced a few modeling approaches of varying complexity and discussed some benefits, challenges, and solutions for deployment.

By Ankur Agarwal
Three Reasons Why IoT Security Needs To Be a Priority in 2023

People increasingly use Internet of Things (IoT) devices in today’s society. Unfortunately, IoT security risks have increased with the popularity of these products. People must allocate time and other resources to improve their security in 2023, or they could put themselves and others at risk. Here are some of the most pressing security concerns to target in the coming year.

1. People Use More IoT Devices Than Non-IoT Products

Most people have heard staggering statistics about the number of currently used IoT devices or the number of connected devices expected within the next few years. However, those statistics were often seen as a representation of what would eventually happen, not the present. Then, in 2020, a milestone arrived: the number of active, connected IoT devices surpassed the number of non-IoT devices in use. It is not hard to imagine how that happened when considering the sheer number of items falling under the IoT umbrella. A person might use a fitness tracker during a gym visit, then rely on a smart coffeemaker, smart washing machine, and smart lights once they get home. When it is time to sleep, they might drift off on a connected mattress that adjusts to their position. Of course, that is just on the consumer level. Companies worldwide use industrial Internet of Things products to track critical processes in real time, find the sources of assembly line backups, and ensure workers move in ergonomically friendly ways to prevent injuries. The increased connectivity provides workplace managers with more visibility over operations, but it also broadens the potential attack surface for cybercriminals to target. Fortunately, users can take steps to reduce the chances of potential attacks, ranging from changing devices’ default passwords to keeping software updated. Sometimes, people are so eager to start using their IoT devices that security becomes an afterthought.
However, failing to be proactive against IoT security risks could put users more at risk of device or network compromise. People must never assume manufacturers have made IoT products as safe as possible out of the box. It is far more likely that the items will need numerous security tweaks to become sufficiently safeguarded against IoT security risks.

2. Known IoT Vulnerabilities Are Becoming More Common

IoT security professionals and others interested in stronger cybersecurity purposefully look for product weaknesses cybercriminals could exploit. The hope is for non-malicious parties to come across those issues first, so the affected companies can fix the problems before they become widespread. A coordinated vulnerability disclosure happens when the people who find something wrong give the company time to fix it before telling the public about the fault. In the best cases, businesses are quick to act and release security patches to address recently found issues. However, some companies do nothing, even after security researchers repeatedly try to engage with them about what they have discovered. Disclosed IoT vulnerabilities are also becoming increasingly common. Research indicated such vulnerability disclosures rose by 57% during the first half of 2022 compared with the previous six months. The data also showed third-party security companies accounted for 45% of those disclosures, followed by IoT device vendors reporting 29% of them. Finally, independent research outlets found and reported 19% of the issues. Another interesting takeaway was that vulnerabilities emerged from firmware and software almost equally. More specifically, 48% were software-related issues, while 46% were in the firmware. Speaking of firmware, the report revealed 40% of the identified firmware vulnerabilities were fully or partially remediated. That was a significant jump over the previous six months, when only 21% fell into those categories.
Information about previously undetected vulnerabilities is an excellent way to limit IoT security risks. However, the ideal situation happens when people find problems before products arrive on the market.

3. Hackers Are Targeting IoT Devices More Often

There is not just an increase in security problems with IoT devices; a related trend shows hackers choosing to attack IoT products more often than they once did. That is probably happening for several reasons. Firstly, with IoT products becoming more popular, hackers have more options regarding which devices they attack and how. Relatedly, attacking devices that are widespread throughout society makes it easier to get more devastating results. There is also the fact that IoT manufacturers are working within tight timeframes, trying to get the latest and greatest products on the market before competitors develop something similar. The IoT lacks global standards for producers to follow, so there is no easy way for purchasers to see how well specific IoT devices stack up against others in terms of security. A study of the last quarter of 2022 indicated IoT malware attacks went up by 98%. There was also a 22% rise in types of malware seen for the first time. That suggests cybercriminals are getting more creative with their methods, which could pose problems for security teams trying to tighten organizations’ defenses against IoT security risks. Sometimes, companies have so many IoT devices used by employees or within the industrial environment that they are not even sure how many connected products they have. That is problematic because it makes it harder to confirm whether attacks occurred. When it takes longer to pinpoint network infiltrations, hackers have more opportunities to wreak havoc within the organization. Hackers can also cast extensive nets when orchestrating their attacks.
Consider the example of identified vulnerabilities that could affect more than 100 million IoT devices used at the consumer and enterprise levels.

IoT Security Risks Are Rampant

Most of today’s connected products have security problems to some degree. That is no reason to avoid using IoT devices, but it is a potent reminder that people must know and follow security best practices to increase protection against attacks. Security researchers have already found and warned people about specific threats to their devices. The number of vulnerabilities will almost certainly rise as new devices are released and more prominent market segments start using them. Maintaining a security-first mindset at the factory level would make those security weaknesses less likely. However, the purchasers of IoT devices must also educate themselves on the basic steps to follow. It also helps if they stay abreast of how hackers are attacking IoT devices and which products are most at risk. Even though some attacks are entirely new, others follow distinctive patterns. IoT security flaws will always exist. However, working diligently to reduce the associated risks is a safe and practical action to take while using these products in 2023 and beyond.

By Emily Newton
Why Memory Allocation Resilience Matters in IoT

Memory allocation is one of those things developers don’t think too much about. After all, modern computers, tablets, and servers have so much memory that it often seems like an infinite resource. And, if there is any trouble, a memory allocation failure or error is considered so unlikely that the system normally defaults to program exit. Things are very different, however, when it comes to the Internet of Things (IoT). In these embedded connected devices, memory is a limited resource, and multiple programs fight over how much they can consume. The system is smaller and so is the memory. Therefore, memory is best viewed as a limited resource and used conservatively. It’s in this context that memory allocation, commonly known as malloc, takes on great importance in our sector. Malloc is the process of reserving a portion of computer memory during the execution of a program or process. Getting it right, especially for devices connected to the internet, can make or break performance. So, let’s take a look at how developers can build resilience into their malloc approach and what it means for connected device performance going forward.

Malloc and Connected Devices: A Short History

Let’s start from the beginning. Traditionally, malloc has not been used often in embedded systems. This is because older devices didn’t typically connect to the internet and, therefore, had vastly different memory demands. These older devices did, however, create a pool of resources upon system start from which to allocate resources. A resource could be a connection, and a system could be configured to allow n connections from a statically allocated pool. In a non-internet-connected system, the state of the system is normally somewhat restricted, and therefore the upper boundaries of memory allocation are easier to estimate. But this can change drastically once an embedded system connects to the internet.
For example, a device can have multiple connections, and each can have a different memory requirement based on what the connection is used for. Here, the buffer memory required for a data stream on a connection depends on the latency of the connection to obtain a certain throughput, using some probability function for packet losses or other network-dependent behavior. This is normally not a problem on modern high-end systems. But remember that developers face restricted memory resources in an embedded environment, so you cannot simply assume there is enough memory. This is why it is very important in IoT embedded development to think about how to create software that is resilient to memory allocation errors (otherwise known as malloc fails).

Modern Embedded Connected Systems and Malloc

In modern connected embedded systems, malloc is more frequently used, and many embedded systems and platforms have decent malloc implementations. The reason for the shift is that modern connected embedded systems do more tasks, and it is often not feasible to statically allocate the maximum required resources for all possible executions of the program. This shift to using malloc actively in modern connected embedded systems requires more thorough and systematic software testing to uncover errors. Usually, allocation errors are not tested systematically, since failure is often thought of as something that happens with such a small probability that it is not worth the effort. And because allocation errors are so rare, any bugs can live for years before being found.

Mallocfail: How to Test for Errors

The good news is that developers can leverage software to test allocation errors. A novel approach is to run a program and inject allocation errors in all unique execution paths where allocation happens. This is made possible with the tool mallocfail. Mallocfail, as the name suggests, helps test malloc failures in a deterministic manner.
Rather than testing randomly, the tool automatically enumerates the different control paths to malloc failure. It was inspired by a Stack Overflow answer. In a nutshell, the tool overrides malloc, calloc, and realloc with custom versions. Each time a custom allocator runs, the function uses libbacktrace to generate a text representation of the current call stack and then generates a sha256 hash of that text. The tool then checks whether the new hash has already been seen. If it has never been seen, the memory allocation fails. The hash is stored in memory and written to disk. If the hash (that is, the particular call stack) has been seen before, then the normal libc version of the allocator is called as usual. Each time the program starts, the hashes that have already been seen are loaded from disk. This is something that I’ve used first-hand and found very useful. For example, at my company, we successfully tested mallocfail on our embedded edge software development kit. I’m pleased to report that the tool managed to identify a few problems in the SDK and its third-party libraries. As a result, the former problems are now fixed and the latter have received patches.

Handling Malloc Fails

Handling allocation errors can be a bit tricky in a complex system. For example, consider the need to allocate data to handle an event. Different patterns exist to circumvent this problem. The most important is to allocate the necessary memory such that an error can be communicated back to the program in case of an allocation failure, and such that no code path fails silently. The ability to handle malloc fails is something that my team thinks about often. It’s not much of a problem on other devices, but it can cause big issues on embedded devices connected to the internet. For this reason, our SDK includes functionality to limit certain resources, including connections, streams, stream buffers, and more.
This way, a system can be configured to cap the amount of memory used so that malloc errors are less likely to happen (and when they do occur, they surface as resource allocation errors). A system running out of memory often struggles to perform at all, so it really makes sense to lower the probability of allocation errors. This is often handled by limiting which functionality and tasks can run simultaneously. As someone who has been working in this field for two decades, I believe developers should embrace malloc best practices when it comes to modern embedded connected devices. My advice is to deeply consider how your embedded device resolves malloc issues and investigate the most efficient way of using your memory. This means designing with dynamic memory allocation in mind and testing as much as possible. The performance and usability of your device depend on it.

By Carsten Rhod Gregersen
Building a 24-Core Docker Swarm Cluster on Banana Pi Zero

If you are a software developer, DevOps engineer, or system administrator, you have a use case for single-board computers like the Raspberry Pi. I spent six months actively tracking the availability of the Raspberry Pi 4 until I got the chance to buy some and build a beautiful 8-node cluster to run MariaDB products on. Unfortunately, Raspberry Pi devices are out of stock these days. Although they become available from time to time and you can use rpilocator.com to track availability, if you are lucky enough to find one, don’t expect it to be cheap (although this could change in 2023). Fortunately, there are alternatives, one of the cheapest being the Banana Pi M2 Zero. I knew that MariaDB runs on the Raspberry Pi Zero 2 W since I had previously created a portable database server on it. But would the Banana Pi M2 Zero be able to run a MariaDB database? Well, yes! Of course. No surprise here, since the Banana Pi M2 Zero has almost the same technical specs as the Raspberry Pi Zero 2 W:

Quad-core processor
512M DDR3 RAM
Onboard Wi-Fi

Since I suspect that you might want to build your own physical cluster with single-board computers and might be frustrated by the unavailability and high prices of the Raspberry Pi, I decided to take on the challenge of creating a cheap cluster (at least as cheap as it can get) using Banana Pi devices and documenting the process so it’s easier for you to replicate. Enjoy!

What You Need

Here’s what you need to buy:

Banana Pi M2 Zero: Technically, you need only two devices to make a cluster, but to really have a good lab for experimenting with distributed systems, you need at least three and ideally four or more. I went crazy and got six of them. Where can you buy them? You’ll have to Google it. Remember to check availability. Even though they are much more available than the Raspberry Pi, it’s a good idea to double-check that the store you pick has them in stock and ready for delivery.

USB cables: Each Banana Pi needs to be powered via a micro USB port.
You need a cable with a micro USB connector on one end and whatever connector your power supply accepts on the other (see the next point).

USB power supply: To power the devices, I recommend a dedicated USB power supply. You could also just use individual chargers, but the setup will be messier and will require a power strip with as many outlets as you have devices. It’s better to use a USB multi-port power supply. I used an Anker PowerPort 6, but there are also good and cheaper alternatives. You’ll have to Google this too. Check that each port can supply 5V and at least 2.4A.

MicroSD cards: Even though the Banana Pi M2 Zero has 8G of onboard eMMC flash memory, it’s easier to set it up using a micro SD card. Get one per device. Try to use fast ones; it makes quite a difference in performance! I recommend at least 16GB. For reference, I used SanDisk Extreme Pro Micro/SDXC cards with 32GB, which offer a write speed of 90 MB/s and a read speed of 170 MB/s.

Optional: MicroSD card reader: If your computer doesn’t have an SD card port, you’ll need a USB SD card reader to connect the card to your computer and install the operating system.

Cluster case: You can let your Banana Pi devices spread over your desk if you want to, but chances are you will be more inclined to play with your cluster if it is easy to move around. Since the Banana Pi M2 Zero shares the form factor of the Raspberry Pi Zero, any cluster case for the Raspberry Pi Zero also works for the Banana Pi M2 Zero. The simplest and cheapest option is to use bolts and nuts to stack up the boards (check this example by Jeff Geerling). I recommend one of the cases from The Pi Hut. Check out the Mini Cluster Case for Raspberry Pi Zero or the Mini Cluster Case for Raspberry Pi Zero 2 (with Fans).

GPIO headers: If you want to connect your Banana Pi to the external world through things such as sensors, you’ll have to put your soldering skills into practice, because you’ll need to solder a GPIO header onto each device.
Fans: Not the ones you’ll get online once you publish a pic of your new shiny cluster; no, the ones that cool down your little computers. These will improve performance at the cost of noise (not too loud) and price (not too expensive). You don’t really need them, but if you decide to go for it, make sure that your cluster case can hold the fans and that you have installed the GPIO headers (see the previous point).

Heat sinks: Heat sinks can be used as an alternative to fans or in conjunction with them. Any heat sink for the Raspberry Pi Zero works with the Banana Pi M2 Zero.

Antennas: I have to say that a negative point of the Banana Pi M2 Zero is its poor Wi-Fi connection. This can be improved using external antennas, though. Check this review by Bret Weber for a comparison of different kinds of antennas compatible with the Banana Pi M2 Zero.

Wi-Fi repeater: If you don’t want to install individual antennas, or want to be able to connect your cluster to different networks (for example, when traveling), a Wi-Fi repeater is an excellent option. With this, you can connect your little computers to the repeater and only change the connection configuration in the repeater when you move from one network to another. If you put the repeater close to the cluster, you don’t need to install individual antennas on each device. This works great for me since, as a Developer Advocate, I can take the cluster with me and use it during live demos without having to reconfigure the connection on each node.

Assembling the Cluster

One of the fun parts of building this cluster is the physical assembly of the boards in a case. If you want to use a case and fans, start by soldering the GPIO headers onto the Banana Pi boards. It had been years since I last used a soldering iron, so I struggled with the first boards. I recommend watching videos on YouTube and practicing if you don’t have previous experience or your soldering skills feel a bit rusty.
The main key to soldering the pins properly is to realize that you need to make the pin and the pad on the board hot, not the solder. Keep that in mind. Depending on the case you select, the assembly process varies. I’m sure you’ll figure it out. I first mounted the boards and the fans on each layer and then stacked the layers one by one. I also tested that each fan was working correctly before mounting it on its layer. In fact, one of the fans was broken and produced a much louder noise than the others. Fortunately, I had a spare fan and was able to replace it immediately. As you stack layers, the rack starts to take shape, and when you mount the last layer, you get a rewarding feeling. The last step is to connect all the boards to the USB power supply and turn them on. However, before that, we need to install the operating system on the micro SD cards.

Installing Armbian

There are Linux and Android operating system images for the Banana Pi M2 Zero. We are going to use Linux here; specifically, Armbian 21.08.1. You can download the image from this page (Armbian_21.08.1_Bananapim2zero_focal_current_5.10.60.img.xz). To install Armbian on the micro SD cards, you need a program like balenaEtcher. Download and install this tool on your computer. Select the Armbian image file and the drive that corresponds to the micro SD card. Flash the image and repeat the process for each micro SD card.

Configuring the Banana Pi M2 Zero Headless (For Real)

I tried different things to configure the Banana Pi M2 Zero to automatically connect to my Wi-Fi network during the first boot, but I didn’t succeed. I defaulted to preconfiguring the network connection with a static IP so that I could later continue further configuration through SSH, without having to scan ports (when using DHCP) or use external monitors.
Configuring the Wi-Fi Connection

To configure the Wi-Fi connection, Armbian includes the /boot/armbian_first_run.txt.template file, which allows you to configure the operating system when it runs for the first time. The template includes instructions, so it’s worth checking. You have to rename this file to armbian_first_run.txt. Here’s what I used:

FR_general_delete_this_file_after_completion=1
FR_net_change_defaults=1
FR_net_ethernet_enabled=0
FR_net_wifi_enabled=1
FR_net_wifi_ssid='my_2.4G_connection_id'
FR_net_wifi_key='my_password'
FR_net_wifi_countrycode='FI'
FR_net_use_static=1
FR_net_static_gateway='192.168.1.1'
FR_net_static_mask='255.255.255.0'
FR_net_static_dns='192.168.1.1 8.8.8.8'
FR_net_static_ip='192.168.1.171'

Use your own Wi-Fi details, including the connection name (you have to use a 2.4G connection), password, country code, gateway, mask, and DNS. Remember to use the connection to the Wi-Fi repeater if you are using one. Another caveat is that I wasn’t able to read the SD card from macOS. I had to use another laptop with Linux on it to make the changes to the configuration file on each SD card. To mount the SD card on Linux, run the following command before and after inserting the SD card and see what changes:

Shell
sudo fdisk -l

In my case, the SD card is at /dev/mmcblk0p1. Now I can mount the SD card:

Shell
sudo mount /dev/mmcblk0p1 /media/sdcard/

Since I had six nodes, I created a Bash script to automate the process. The script accepts the IP address to set as a parameter. For example:

Shell
sudo ./armbian-setup.sh 192.168.1.171

I ran this command on each of the six SD cards, changing the IP address from 192.168.1.171 to 192.168.1.176.

Connecting Through SSH

Now it’s time for the fun part. Insert the flashed and configured micro SD cards into each Banana Pi M2 Zero and turn the power supply on. Be patient! Give the small devices time to boot. The first boot can take several minutes.
Use the ping command to check whether the device is ready and connected to the network:

Shell
ping 192.168.1.171

Once it responds, connect to the mini-computer through SSH using the root user and the IP address that you configured:

Shell
ssh root@192.168.1.171

The default password is: 1234

Follow the steps to finish the configuration and repeat the process for each Banana Pi.

Setting up the Cluster Using Ansible

Armbian has the armbian-config CLI tool that allows you to configure the system and install additional software using an interactive text-based user interface. However, since we are building a cluster with potentially hundreds or thousands of nodes (not really; only six in my case, but if you have tons of them, please let me know), it’s better to automate this process. Ansible is a great tool for this.

Installing and Configuring Ansible

You have to install Ansible on your computer and generate a configuration file:

Shell
sudo su
ansible-config init --disabled -t all > /etc/ansible/ansible.cfg
exit

In the /etc/ansible/ansible.cfg file, set the following properties (enable them by removing the semicolon):

host_key_checking=False
become_allow_same_user=True

This just makes the whole process easier. Never do this in a production environment!

Create an Ansible Inventory

An Ansible inventory is a file that lists the machines on which it will operate. You can define the inventory in the /etc/ansible/hosts file.
Since I have six nodes, I defined the following inventory:

##############################################################################
# 6-node Banana Pi cluster
##############################################################################
[bpies]
192.168.1.171 ansible_user=pi hostname=bpi01
192.168.1.172 ansible_user=pi hostname=bpi02
192.168.1.173 ansible_user=pi hostname=bpi03
192.168.1.174 ansible_user=pi hostname=bpi04
192.168.1.175 ansible_user=pi hostname=bpi05
192.168.1.176 ansible_user=pi hostname=bpi06

[bpies_manager]
bpi01.local ansible_user=pi

[bpies_workers]
bpi[02:06].local ansible_user=pi

Adjust the values to your own setup. Add one manager node to the bpies_manager group and one or more nodes to the bpies_workers group. These will be the Docker Swarm manager and worker nodes, respectively. Now we are ready to automate the configuration process!

Configuring Nodes and Setting up Docker Swarm With Ansible

Ansible includes the concept of playbooks. A playbook is a list of tasks that can be executed on the hosts defined in an inventory. Since I promised it’d be easier for you, I created Ansible playbooks for setting up a Docker Swarm cluster on your Banana Pi devices. Clone the following GitHub repository:

Shell
git clone https://github.com/alejandro-du/banana-pi-cluster-ansible-playbooks.git

The repository includes several .yml files. Each file is an Ansible playbook. Let’s start by configuring the LEDs (so that they blink when there’s activity), setting a hostname, and installing Avahi (so that you can forget about IPs). Run the following:

Shell
ansible-playbook configure-hosts.yml --ask-become-pass

Enter the password that you configured for the root user and press <Enter> twice. Pay attention to the LEDs on your Banana Pi boards. They should start blinking on processor activity.
Now run the following to set up a Docker Swarm cluster on your Banana Pi devices:

Shell
ansible-playbook docker-swarm.yml --ask-become-pass

SSH into the manager node and run the following:

Shell
docker node ls

You should see your Docker Swarm cluster ready. Congrats! You now have a Docker Swarm cluster running on Banana Pi M2 Zero boards!

Deploying a Replicated MariaDB Database on Docker Swarm

MariaDB is one of the most popular open-source relational databases. It supports any workload through its purpose-built pluggable storage engines. It has analytical capabilities (with ColumnStore) and unlimited write scalability with high availability (with Xpand). It also supports basic replication with a primary node and multiple replicas, a perfect use case for a Banana Pi cluster. One of the cool things about open-source products is that their ecosystems are usually huge, at least for popular projects. This is the case with MariaDB. The Banana Pi M2 Zero features a 32-bit ARM processor, so it was easy for me to find a MariaDB Docker image on Docker Hub built for this architecture. I extended the image to simplify the deployment of a replicated MariaDB topology on Docker Swarm suitable for demos and experimentation. Be warned that these images are not for production environments! They are only meant to be used in demos where there’s no sensitive information or need for support. You can see the source code of the images in this GitHub repository. With Docker Swarm, you can easily deploy (and remove) a stack defined in a Docker Compose file. SSH into your manager node and create a new file with the name replication.stack.yml and the following content:

YAML
version: "3.9"
services:
  primary:
    image: alejandrodu/mariadb-arm-primary
    ports:
      - "3306:3306"
  replica:
    image: alejandrodu/mariadb-arm-replica
    environment:
      PRIMARY_SERVER_IP_ADDRESS: primary
    ports:
      - "3307:3306"
    deploy:
      replicas: 5

Pretty simple, right?
You just have to set the number of replicas for the replica service (in the last line of the file) to the number of Banana Pi boards minus one (the remaining board is for the primary service). I have six boards, so I specified five replicas. The goal is to have one container per Banana Pi board. Deploy the stack as follows:

Shell
docker stack deploy -c replication.stack.yml mariadb

Docker downloads the images and creates the services (primary and replica) with the number of replicas that you specified. You can check the status by running:

Shell
docker stack ps mariadb

To test the database and replication features, you can use your computer to connect to the MariaDB database using the MariaDB CLI client (mariadb), or a GUI tool such as DBeaver, HeidiSQL, or even VS Code through a SQL client extension. You can connect to the database through any node, even if no containers for the service are running on that node; Docker forwards the request to a node running the service. The Docker images automatically create the following credentials to access a database named demo:

Database user: user
Password: password

Here’s a screenshot of the connection details I used to connect to the primary service (the primary database) using the Database Client extension for VS Code:

The reason this connects to the primary database is that the stack we defined in the replication.stack.yml file forwards requests on port 3306 to a container running the primary service. On the other hand, requests to port 3307 are forwarded to one of the containers running the replica service. Once connected to the primary service using port 3306 through any node, check which host you are connected to:

MariaDB SQL
SELECT @@hostname;

This is the hostname assigned by Docker to the container running the primary database. Take note of this value (mine was 3ac9e65c3601).
Now create a new table and insert some data:

MariaDB SQL
CREATE OR REPLACE TABLE demo.message(
    id INT AUTO_INCREMENT PRIMARY KEY,
    content TINYTEXT NOT NULL
) engine=Aria;

INSERT INTO demo.message(content) VALUES ("It works!");

Notice that, for fun, and to show you something new (maybe), I used the TINYTEXT data type, which allows a maximum of 255 characters. Our cluster is tiny, so I figured our table column should be tiny as well. Silly jokes aside, also notice that I used the Aria storage engine just to illustrate the concept of pluggable storage engines in MariaDB. Most of the time, what you need is the default InnoDB engine, and you can even join tables that use different storage engines in the same SQL query. There are many storage engines available: check them out! Aria is suitable for read-heavy workloads, but it doesn’t support foreign keys and other features available in InnoDB. For this demo, Aria works just great. Before moving on, check that the data is stored in the primary database:

MariaDB SQL
SELECT * FROM demo.message;

Create another database connection similar to the previous one, but this time use port 3307 to connect to one of the replica nodes. Docker will pick one for you. Again, you can do this using any SQL client. To show you a different example, here’s how to connect to a replica database using the MariaDB CLI client:

Shell
mariadb --host 192.168.1.171 --port 3307 -u user -p

This time, you have to use an IP address instead. You can use the IP address of any of the Banana Pi computers. Check which node you are connected to this time:

MariaDB SQL
SELECT @@hostname;

You’ll get the hostname of the container that Docker selected to serve the replica service. I got f236189e5f4c, which is different from 3ac9e65c3601 (the host running the primary database). This means you are connected to a different container and very likely to a different Banana Pi board.
We inserted the data on a different node (the one running the primary database); let’s see whether it is being automatically replicated to the replica node we are connected to: MariaDB SQL SELECT * FROM demo.message; You should see the data replicated: Success! Data replication is working. Your applications can load-balance reads across all the replica nodes by connecting to the database using port 3307, and perform writes by connecting using port 3306. This could be further improved by using a database proxy configured to perform transparent read-write splitting, but that is out of the scope of this article. What’s Next? You now have a great lab to learn distributed computing concepts, tools, and technologies. You saw only one example (a distributed database), but there’s much more that you can do with your shiny new cluster. Try, for example, Dockerizing a stateless application (for instance, a web service) and make it connect to the database you just deployed. You might want to explore visualization tools as well. For example, you can deploy the Docker Swarm Visualizer as follows (or the equivalent in a Docker Compose file): Shell docker service create \ --name=viz \ --publish=9000:8080 \ --constraint=node.role==manager \ --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \ alexellis2/visualizer-arm:latest Point your browser to http://bpi01.local:9000 and see how the containers are distributed in the cluster: Deploy your own applications and enjoy your new Banana Pi M2 Zero cluster!
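As a closing footnote on the read/write split mentioned above: an application can implement a simple version of it itself. Here is a minimal, illustrative Python sketch; the verb list and routing rule are assumptions for the example, and a real database proxy does much more (session consistency, failover, load balancing):

```python
# Minimal read/write-splitting sketch for the stack described above:
# writes go to the primary (port 3306), reads to a replica (port 3307).
# The statement is classified by its first keyword only; anything more
# sophisticated is out of scope for this illustration.

WRITE_VERBS = {"insert", "update", "delete", "replace", "create", "drop", "alter"}

def pick_port(sql: str) -> int:
    """Return 3306 for write statements and 3307 for read statements."""
    verb = sql.lstrip().split(None, 1)[0].lower()
    return 3306 if verb in WRITE_VERBS else 3307

print(pick_port("INSERT INTO demo.message(content) VALUES ('It works!')"))  # 3306
print(pick_port("SELECT * FROM demo.message"))                              # 3307
```

An application would then open its connection to the chosen port on any cluster node, and Docker routes the request to a matching container.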

By Alejandro Duarte CORE
How To Use MQTT in Dart

Dart is a client-optimized language for developing fast apps on any platform. Its goal is to offer the most productive programming language for multi-platform development, paired with a flexible execution runtime platform for app frameworks. MQTT is a lightweight IoT messaging protocol based on the publish/subscribe model, which can provide real-time and reliable messaging services for connected devices with minimal code and bandwidth. It is widely used in industries such as IoT, mobile Internet, smart hardware, the Internet of Vehicles, and power and energy. This article introduces how to use the mqtt_client library in a Dart project to implement connecting, subscribing, publishing, and receiving messages between a client and an MQTT broker. Preparation The examples in this article are based on the macOS environment. Get the SDK Please refer to Get the SDK: Shell $ brew tap dart-lang/dart $ brew install dart $ dart --version Dart SDK version: 2.13.0 (stable) (Wed May 12 12:45:49 2021 +0200) on "macos_x64" Initializing the Project Shell $ dart create -t console-full mqtt_demo $ cd mqtt_demo The directory structure is as follows. Plain Text ├── CHANGELOG.md ├── README.md ├── analysis_options.yaml ├── bin │ └── mqtt_demo.dart ├── pubspec.lock └── pubspec.yaml Installing Dependencies In this article, we use mqtt_client as the MQTT client library. Install it by running the following command: Shell $ dart pub add mqtt_client This adds a line like this to the project's pubspec.yaml file: YAML dependencies: mqtt_client: ^9.6.2 Using MQTT We will use the free public MQTT broker provided by the MQTT cloud service EMQX Cloud. The server access information is as follows: Broker: broker.emqx.io TCP Port: 1883 WebSocket Port: 8083 Connecting to the MQTT Server Edit the bin/mqtt_demo.dart file. 
Dart import 'dart:async'; import 'dart:io'; import 'package:mqtt_client/mqtt_client.dart'; import 'package:mqtt_client/mqtt_server_client.dart'; final client = MqttServerClient('broker.emqx.io', '1883'); Future<int> main() async { client.logging(on: true); client.keepAlivePeriod = 60; client.onDisconnected = onDisconnected; client.onConnected = onConnected; client.pongCallback = pong; final connMess = MqttConnectMessage() .withClientIdentifier('dart_client') .withWillTopic('willtopic') .withWillMessage('My Will message') .startClean() .withWillQos(MqttQos.atLeastOnce); print('client connecting....'); client.connectionMessage = connMess; try { await client.connect(); } on NoConnectionException catch (e) { print('client exception - $e'); client.disconnect(); } on SocketException catch (e) { print('socket exception - $e'); client.disconnect(); } if (client.connectionStatus!.state == MqttConnectionState.connected) { print('client connected'); } else { print('client connection failed - disconnecting, status is ${client.connectionStatus}'); client.disconnect(); exit(-1); } return 0; } /// The unsolicited disconnect callback void onDisconnected() { print('OnDisconnected client callback - Client disconnection'); if (client.connectionStatus!.disconnectionOrigin == MqttDisconnectionOrigin.solicited) { print('OnDisconnected callback is solicited, this is correct'); } exit(-1); } /// The successful connect callback void onConnected() { print('OnConnected client callback - Client connection was successful'); } /// Pong callback void pong() { print('Ping response client callback invoked'); } Then, execute Shell $ dart run bin/mqtt_demo.dart We will see that the client has successfully connected to the MQTT broker. Instructions MqttConnectMessage: set connection options, including timeout settings, authentication, and last will messages. 
Example of Certificate Connection: Dart /// Security context SecurityContext context = new SecurityContext() ..useCertificateChain('path/to/my_cert.pem') ..usePrivateKey('path/to/my_key.pem', password: 'key_password') ..setClientAuthorities('path/to/client.crt', password: 'password'); client.secure = true; client.securityContext = context; Subscribe Add the following code. Dart client.onSubscribed = onSubscribed; const topic = 'topic/test'; print('Subscribing to the $topic topic'); client.subscribe(topic, MqttQos.atMostOnce); client.updates!.listen((List<MqttReceivedMessage<MqttMessage?>>? c) { final recMess = c![0].payload as MqttPublishMessage; final pt = MqttPublishPayload.bytesToStringAsString(recMess.payload.message); print('Received message: topic is ${c[0].topic}, payload is $pt'); }); /// The subscribed callback void onSubscribed(String topic) { print('Subscription confirmed for topic $topic'); } Then, execute Shell $ dart run bin/mqtt_demo.dart We see that we have successfully subscribed to the MQTT topic. Publish Message Dart client.published!.listen((MqttPublishMessage message) { print('Published topic: topic is ${message.variableHeader!.topicName}, with Qos ${message.header!.qos}'); }); const pubTopic = 'test/topic'; final builder = MqttClientPayloadBuilder(); builder.addString('Hello from mqtt_client'); print('Subscribing to the $pubTopic topic'); client.subscribe(pubTopic, MqttQos.exactlyOnce); print('Publishing our topic'); client.publishMessage(pubTopic, MqttQos.exactlyOnce, builder.payload!); We see that the message has been published successfully and we receive it. Complete Test We use the following code for the complete test. 
Dart import 'dart:async'; import 'dart:io'; import 'package:mqtt_client/mqtt_client.dart'; import 'package:mqtt_client/mqtt_server_client.dart'; final client = MqttServerClient('broker.emqx.io', '1883'); Future<int> main() async { client.logging(on: false); client.keepAlivePeriod = 60; client.onDisconnected = onDisconnected; client.onConnected = onConnected; client.onSubscribed = onSubscribed; client.pongCallback = pong; final connMess = MqttConnectMessage() .withClientIdentifier('dart_client') .withWillTopic('willtopic') .withWillMessage('My Will message') .startClean() .withWillQos(MqttQos.atLeastOnce); print('Client connecting....'); client.connectionMessage = connMess; try { await client.connect(); } on NoConnectionException catch (e) { print('Client exception: $e'); client.disconnect(); } on SocketException catch (e) { print('Socket exception: $e'); client.disconnect(); } if (client.connectionStatus!.state == MqttConnectionState.connected) { print('Client connected'); } else { print('Client connection failed - disconnecting, status is ${client.connectionStatus}'); client.disconnect(); exit(-1); } const subTopic = 'topic/sub_test'; print('Subscribing to the $subTopic topic'); client.subscribe(subTopic, MqttQos.atMostOnce); client.updates!.listen((List<MqttReceivedMessage<MqttMessage?>>? 
c) { final recMess = c![0].payload as MqttPublishMessage; final pt = MqttPublishPayload.bytesToStringAsString(recMess.payload.message); print('Received message: topic is ${c[0].topic}, payload is $pt'); }); client.published!.listen((MqttPublishMessage message) { print('Published topic: topic is ${message.variableHeader!.topicName}, with Qos ${message.header!.qos}'); }); const pubTopic = 'topic/pub_test'; final builder = MqttClientPayloadBuilder(); builder.addString('Hello from mqtt_client'); print('Subscribing to the $pubTopic topic'); client.subscribe(pubTopic, MqttQos.exactlyOnce); print('Publishing our topic'); client.publishMessage(pubTopic, MqttQos.exactlyOnce, builder.payload!); print('Sleeping....'); await MqttUtilities.asyncSleep(80); print('Unsubscribing'); client.unsubscribe(subTopic); client.unsubscribe(pubTopic); await MqttUtilities.asyncSleep(2); print('Disconnecting'); client.disconnect(); return 0; } /// The subscribed callback void onSubscribed(String topic) { print('Subscription confirmed for topic $topic'); } /// The unsolicited disconnect callback void onDisconnected() { print('OnDisconnected client callback - Client disconnection'); if (client.connectionStatus!.disconnectionOrigin == MqttDisconnectionOrigin.solicited) { print('OnDisconnected callback is solicited, this is correct'); } exit(-1); } /// The successful connect callback void onConnected() { print('OnConnected client callback - Client connection was successful'); } /// Pong callback void pong() { print('Ping response client callback invoked'); } Summary We have now connected to a public MQTT broker using the mqtt_client library in Dart and implemented connecting, publishing, subscribing, and testing between the client and the MQTT server.

By Zhiwei Yu
Build Test Scripts for Your IoT Platform

In a previous article, I introduced the open-source test tool JMeter and used a simple HTTP test as an example to demonstrate its capabilities. This article shows you how to build test scripts for complex test scenarios. The user interface displays a JMeter test script in a "tree" format. The saved test script (in the .jmx format) is XML. The JMeter script tree treats a test plan as the root node, and the test plan includes all test components. In the test plan, you can configure user-defined variables called by components throughout the entire test plan. Variables can also control thread group behavior, library files used in the test, and so on. You can build rich test scenarios using various test components in the test plan. Test components in JMeter generally fall into the following categories: Thread group Sampler Logic controller Listener Configuration element Assertion Timer Pre-processor Post-processor Thread Groups A thread group is the beginning point for all test plans (so all samplers and controllers must be placed under a thread group). A thread group can be regarded as a virtual user pool in which each thread is essentially a virtual user, and multiple virtual users perform the same batch of tasks simultaneously. Each thread is independent and doesn't affect the others. During the execution of one thread, the variables of the current thread don't affect the variable values of other threads. In this interface, the thread group can be configured in various ways. 1. Action to Be Taken After a Sampler Error The following configuration items control whether a test continues when an error is encountered: Continue: Ignore errors and continue execution. Start Next Thread Loop: Ignore the error, terminate the current loop of the thread, and execute the next loop. Stop Thread: Stop executing the current thread without affecting the normal execution of other threads. Stop Test: Stop the entire test after currently executing threads have finished their current sampling. 
Stop Test Now: The entire test execution stops immediately, even if it interrupts currently executing samplers. 2. Number of Threads This is the number of concurrent (virtual) users. Each thread runs the test plan completely independently without interfering with any others. The test uses multiple threads to simulate concurrent access to the server. 3. Ramp-Up Period The ramp-up time sets the time required to start all threads. For example, if the number of threads is set to 10 and the ramp-up time is set to 100 seconds, then JMeter takes 100 seconds to start all 10 threads (each thread begins 10 seconds after the previous one). If the ramp-up value is set small and the number of threads is set large, there's a lot of stress on the server at the beginning of the test. 4. Loop Count Sets the number of loops per thread in the thread group before ending. 5. Delay Thread Creation Until Needed By default, all threads are created when the test starts. If this option is checked, threads are created when they are needed. 6. Specify Thread Lifetime Control the execution time of thread groups. You can set the duration and startup delay (in seconds). Samplers A sampler simulates user operations. It's a running unit that sends requests to the server and receives response data from the server. A sampler is a component inside a thread group, so it must be added to the thread group. JMeter natively supports a variety of samplers, including a TCP Sampler, HTTP Request, FTP Request, JDBC Request, Java Request, and so on. Each type of sampler sends different requests to the server according to the set parameters. TCP Sampler The TCP Sampler connects to the specified server over TCP/IP, sends a message to the server after the connection is successful, and then waits for the server to reply. The properties that can be set in the TCP Sampler are as follows: TCPClient Classname This represents the implementation class that handles the request. 
By default, org.apache.jmeter.protocol.tcp.sampler.TCPClientImpl is used, and plain text is used for transmission. In addition, JMeter also has built-in support for BinaryTCPClientImpl and LengthPrefixedBinaryTCPClientImpl. The former uses hexadecimal packets, and the latter adds a 2-byte length prefix to BinaryTCPClientImpl. You can also provide custom implementation classes by extending org.apache.jmeter.protocol.tcp.sampler.TCPClient. Target server settings: Server Name or IP and Port Number specify the hostname or IP address and port number of the server application. Connection Options: Determines how you connect to the server. Re-use connection: If enabled, this connection is always open; otherwise, it's closed after reading data. Close Connection: If enabled, this connection is closed after the TCP sampler has finished running. Set No-Delay: If enabled, the Nagle algorithm is disabled, and the sending of small packets is allowed. SO_LINGER: Controls whether to wait for data in the buffer to complete transmission before closing the connection. End of line (EOL) byte value: Determines the byte value at the end of the line. The EOL check is skipped if the specified value is greater than 127 or less than -128. For example, if a string returned by the server ends with a line feed, you can set this option to 10. Timeouts: Set the connect timeout and response timeout. Text to send: Contains the payload you want to send. Login configuration: Sets the username and password used for the connection. HTTP Request Sampler The HTTP Sampler sends HTTP and HTTPS requests to the web server. Here are the settings available: Name and comments Protocol: Set the protocol to send the request to the target server, which can be HTTP, HTTPS, or FILE. The default is HTTP. Server name or IP address: The hostname or IP address of the target server to which the request is sent. Port number: The port number that the web service listens on. 
The default port is 80 for HTTP and 443 for HTTPS. Request method: The method for sending the request, commonly including GET, POST, DELETE, PUT, TRACE, HEAD, OPTIONS, and so on. Path: The target URL (excluding server address and port) to request. Content encoding: How to encode the request (applicable to POST, PUT, PATCH, and FILE). Advanced request options: A few extra options, including: Redirect Automatically: Redirection is not treated as a separate request and is not recorded by JMeter. Follow Redirects: Each redirection is treated as a separate request and is recorded by JMeter. Use KeepAlive: If enabled, Connection: keep-alive is added to the request header when JMeter communicates with the target server. Use multipart/form-data for POST: If enabled, requests are sent using multipart/form-data; otherwise, application/x-www-form-urlencoded is used. Parameters: JMeter uses parameter key-value pairs to generate request parameters and send these request parameters in different ways depending on the request method. For example, for GET and DELETE requests, parameters are appended to the request URL. Message body data: If you want to pass parameters in JSON format, you must configure the Content-Type as application/json in the request header. File upload: Send a file in the request. This is usually how HTTP file upload behavior is simulated. Logic Controllers The JMeter Logic Controller controls the execution logic of components. The JMeter website explains it like this: "Logic Controllers determine the order in which Samplers are processed." The Logic Controller can control the execution order of the samplers. Therefore, the controller needs to be used together with the sampler. Except for the once-only controller, other logic controllers can be nested within each other. Logic controllers in JMeter are mainly divided into two categories. 
They can either control the logical execution order of nodes during the execution of the test plan (for example, loop and conditional controllers), or they can act on specific groups of requests, as the throughput and transaction controllers do. Transaction Controller Sometimes, you want to count the overall response time of a group of related requests. In this case, you need to use a Transaction Controller. The Transaction Controller counts the sampler execution time of all child nodes under the controller. If multiple samplers are defined under the Transaction Controller, then the transaction is considered successful only when all samplers run successfully. Add a transaction controller using the contextual menu: Generate parent sample: If enabled, the Transaction Controller is used as a parent sample for the other samplers. Otherwise, the Transaction Controller is only used as an independent sample. For example, the unchecked Summary Report is as follows: If checked, the Summary Report is as follows: Include duration of timer: If enabled, the duration of timers (delays added before and after the sampler runs) is included in the measured time. Once Only Controller The Once Only Controller, as its name implies, is a controller that executes only once. The request under the controller is executed only once during the loop execution process under the thread group. For tests that require a login, you can consider putting the login request in a Once Only Controller because the login request only needs to be executed once to establish a session. If you set the loop count to 2 and check the result tree after running, you can see that the HTTP request under the Once Only Controller is executed only once, and the other requests are executed twice. Listeners A listener is a series of components that process and visualize test result data. View Results Tree, Graph Results, and Aggregate Report are common listener components. 
View Results Tree This component displays the result, request content, response time, response code, and response content of each sampler in a tree structure. Viewing the information can assist in analyzing whether there is a problem. It provides various viewing formats and filtering methods and can also write the results to specified files for batch analysis and processing. Configuration Element A configuration element provides support for static data configuration. It can be defined at the test plan level, or at the thread group or sampler level, with different scopes for different levels. Configuration elements mainly include User Defined Variables, CSV Data Set Config, TCP Sampler Config, HTTP Cookie Manager, etc. User-Defined Variables By defining a series of variables, you make their values available for use throughout the performance test. Variable names can be referenced within the scope, and variables can be referenced as ${variable name}. In addition to the User Defined Variables component, variables can also be defined in other components, such as Test Plans and HTTP Requests: For example, a defined variable is referenced in an HTTP Request: Viewing the execution results, you can see that the value of the variable has been obtained: CSV Data Set Config During a performance test, you may need parameterized input, such as the username and password, in the login operation. When the amount of concurrency is relatively large, generating the data at runtime places a heavy burden on the CPU and memory. The CSV Data Set Config can be used as the source of parameters required in this scenario. Here are descriptions of some parameters in the CSV Data Set Config: Variable name: Defines the parameter name in the CSV file, which the script can reference as ${variable name}. Recycle on EOF: If set to True, this allows looping again from the beginning when reaching the end of the CSV file. 
Stop thread on EOF: If set to True, this stops running after reading the last record in the CSV file. Sharing mode: Sets the mode shared between threads and thread groups. Assertions An assertion checks whether the request is returned as expected. Assertions are an important part of automated test scripts, so you should pay great attention to them. Commonly used JMeter assertions include Response Assertion, JSON Assertion, Size Assertion, Duration Assertion, Beanshell Assertion, and so on. Below, I introduce the frequently used JSON Assertion. JSON Assertion This is used to assert the content of the response in JSON format. A JSON Assertion is added on an HTTP Sampler in this example, as shown in the following image: The root of the JSON path is always called $, which can be represented by two different styles: dot-notation (.) or bracket-notation ([]). For example: $.message[0].name or $['message'][0]['name']. Here's an example of a request made to https://www.google.com/doodles/json/2022/11. The $[0].name value represents the 'name' part in the first array element in the response. The Additionally assert value specifies that the value of 'name' is to be verified, and the Expected value is expected to be '2022-world-cup-opening-day'. Run the script and look at the results. You can see that the assertion has passed. Here are the possible conditions and how they're treated: If a response result is not in JSON format, it's treated as a failure. If the JSON path cannot find the element, it fails. If the JSON path finds the element, but no conditions are set, it passes. If the JSON path finds an element that does not meet the conditions, it fails. If the JSON path finds the element that meets the conditions, it passes. If the JSON path returns an array, it iterates to determine whether any elements meet the conditions. If yes, it passes. If not, it fails. Go back to JSON Assertion and check the Invert assertion. 
Run the script, check the results, and you can see that the assertion failed: Timers The pause time between requests in a performance test is called "thinking time." In the real world, the pause time can be spent on content search or reading, and the Timer simulates this pause. All timers in the same scope are executed before the samplers. If you want the timer to be applied to only one of the samplers, add the timer as a child node of that sampler. JMeter timers mainly include Constant Timer, Uniform Random Timer, Precise Throughput Timer, Constant Throughput Timer, Gaussian Random Timer, JSR223 Timer, Poisson Random Timer, Synchronizing Timer, and BeanShell Timer. Constant Timer A Constant Timer means that the interval between each request is a fixed value. After configuring the thread delay to 100 and 1000 milliseconds, respectively, run the script: Check the data in the table, where #1 and #2 are the running results when the configuration is 100 milliseconds, and #4 and #5 are the running results when the configuration is 1000 milliseconds. You can see that the interval between #4 and #5 is significantly greater than that between #1 and #2: Constant Throughput Timer The Constant Throughput Timer controls the execution of requests according to the specified throughput. Configure the target throughput as 120 (note that the unit is samples per minute), and then select All active threads in current thread group (shared) based on the calculated throughput: Run the script, check the results, and observe that the throughput is approximately 2 per second (120/60). Pre-Processors and Post-Processors A pre-processor performs some operations before the sampler request. It's often used to modify parameters, set environment variables, or update variables. Similarly, a post-processor performs some operations after the sampler request. Sometimes, the response data needs to be used in subsequent requests, and you need to process the response data. 
For example, if a JWT token from the response needs to be extracted and used for authentication in subsequent requests, a post-processor is used. Using JMeter That concludes the introduction to JMeter's main test components; you can now feel confident starting your own tests. In another article, I will explain using the MQTT plugin in JMeter.
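As a footnote, the JSON Assertion decision rules listed above can be sketched as a small function. This is an illustration of the documented behavior, not JMeter's actual implementation; for simplicity, the JSON path is passed pre-split into keys and indices, and only the expected-value check and the Invert flag are modeled:

```python
import json

# Sketch of the JSON Assertion rules described above (illustrative only).
# `path` is a pre-split list of keys/indices; `expected` plays the role
# of the "Additionally assert value" / "Expected value" fields.
def json_assert(response, path, expected=None, invert=False):
    def check():
        try:
            node = json.loads(response)      # a non-JSON response fails
        except ValueError:
            return False
        try:
            for step in path:                # walk the JSON path
                node = node[step]
        except (KeyError, IndexError, TypeError):
            return False                     # path not found: fail
        if expected is None:
            return True                      # element found, no condition set
        if isinstance(node, list):           # arrays: pass if any element matches
            return any(item == expected for item in node)
        return node == expected
    result = check()
    return (not result) if invert else result  # Invert flips the outcome

doodle = '[{"name": "2022-world-cup-opening-day"}]'
print(json_assert(doodle, [0, "name"], "2022-world-cup-opening-day"))  # True
```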

By Chongyuan Yin
Sending Sensor Data From Raspberry Pi Pico W to HiveMQ Cloud

In a previous post, "MQTT Messaging With Java and Raspberry Pi," I described how data could be sent from a Raspberry Pi Linux board and a Raspberry Pi Pico microcontroller to HiveMQ Cloud. Raspberry Pi Pico W On June 30, 2022, Raspberry Pi released a new product: the Pico W, a new version of the original Pico, but with Wi-Fi onboard. The new board sells for $6, compared to $4 for the original Pico. The new Pico W has the same form factor as the original Pico. There is a minor change in the wiring, as the connection of the onboard LED has changed. You should be able to swap to the new version in most existing projects without any problems. With this new Pico W, we can simplify the setup of the previous article, as the separate Wi-Fi module is no longer needed. About HiveMQ Cloud HiveMQ Cloud is an online MQTT-compatible service that is totally free for up to 100 devices! Even for the most enthusiastic maker, that’s a lot of microcontrollers or computers! The HiveMQ blog published an article by Kudzai Manditereza that also describes how to use the Pico W. MicroPython In the previous project with the Pico, CircuitPython was used to code the program. Unfortunately, when I started this project to connect the Pico W to HiveMQ Cloud, CircuitPython did not yet support the Wi-Fi module of the Pico W. That ticket on GitHub now seems to be closed and resolved, so I leave it up to you to try out. In this post, we will be using MicroPython. "MicroPython is a lean and efficient implementation of the Python 3 programming language that includes a small subset of the Python standard library and is optimized to run on microcontrollers and in constrained environments." To install MicroPython on the Pico W, follow this step-by-step guide provided by Raspberry Pi. Source Code The full info to set up the project, all required libraries, and the full code can be found in this post. The full source code is also available on GitHub. 
Let's take a look at the most important parts of the code. Connecting to Wi-Fi By using a secrets.py file, we can keep the credentials in a separate file and load them into our main code with Python from secrets import secrets From then on, we can use its values where needed, e.g., for the Wi-Fi credentials: Python wlan = network.WLAN(network.STA_IF) wlan.active(True) wlan.connect(secrets["ssid"], secrets["password"]) For a secure connection to the MQTT broker, our device must have a valid timestamp. This can be easily achieved with ntptime: Python ntptime.host = "de.pool.ntp.org" ntptime.settime() Connecting to the MQTT broker and sending a message can be quickly done using the library /lib/umqtt/simple.py provided on GitHub in the MicroPython project, and the following code: Python sslparams = {'server_hostname': secrets["broker"]} mqtt_client = MQTTClient(client_id="picow", server=secrets["broker"], port=secrets["port"], user=secrets["mqtt_username"], password=secrets["mqtt_key"], keepalive=3600, ssl=True, ssl_params=sslparams) mqtt_client.connect() print('Connected to MQTT Broker: ' + secrets["broker"]) # Send a test message to HiveMQ mqtt_client.publish('test', 'HelloWorld') As you can read in the full blog post and on the HiveMQ forum, connecting this way to HiveMQ Cloud is not fully secured and requires additional steps to get a copy of the certificate to be validated on the Pico W. But for most projects, the above approach will be sufficient. 
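The secrets.py file itself isn't shown above. Based on the keys the code reads, it might look like this; all values are placeholders, and the exact broker hostname and port come from your HiveMQ Cloud console:

```python
# Hypothetical secrets.py matching the keys used in the code above.
# Replace every value with your own Wi-Fi and HiveMQ Cloud credentials.
secrets = {
    "ssid": "my-wifi-ssid",
    "password": "my-wifi-password",
    "broker": "xxxxxxxx.s1.eu.hivemq.cloud",  # placeholder HiveMQ Cloud host
    "port": 8883,                             # TLS MQTT port, matching ssl=True
    "mqtt_username": "my-hivemq-username",
    "mqtt_key": "my-hivemq-password",
}
```

Keeping this file out of version control (e.g., via .gitignore) avoids accidentally publishing your credentials.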
Another library, /lib/hcsr04/hcsr04.py, available on GitHub in a project by rsc1975 (Roberto), helps us get the value from a distance sensor and send it to the MQTT broker like this: Python distance = hcsr04.distance_cm() json = "{\"value\": " + str(distance) + "}" mqtt_client.publish("picow/distance", json) Conclusion Although it took me some time to find a good library and the correct connection parameters to connect to HiveMQ Cloud, it turned out to be a fun project with many possibilities for further extension!

By Frank Delporte

Top IoT Experts


Frank Delporte

Java Developer - Technical Writer,
CodeWriter.be

Frank Delporte is a technical writer at Azul, blogger on webtechie.be and foojay.io, author of "Getting started with Java on Raspberry Pi" (https://webtechie.be/books/), and contributor to Pi4J. Frank blogs about his experiments with Java, sometimes combined with electronic components, on the Raspberry Pi.

Tim Spann

Developer Advocate,
StreamNative


Carsten Rhod Gregersen

Founder, CEO,
Nabto

Carsten Rhod Gregersen is the CEO and Founder of Nabto, a P2P IoT connectivity provider that enables remote control of devices with secure end-to-end encryption.

Emily Newton

Editor-in-Chief,
Revolutionized

Emily Newton is a journalist who regularly covers stories for the tech and industrial sectors. She loves seeing the impact technology can have on every industry.

The Latest IoT Topics

Real-Time Stream Processing With Hazelcast and StreamNative
In this article, readers will learn about real-time stream processing with Hazelcast and StreamNative in a shorter time, along with demonstrations and code.
January 27, 2023
by Timothy Spann
· 1,883 Views · 2 Likes
Cloud Native London Meetup: 3 Pitfalls Everyone Should Avoid With Cloud Data
Explore this session from Cloud Native London that highlights top lessons learned as developers transitioned their data needs into cloud-native environments.
January 27, 2023
by Eric D. Schabell CORE
· 1,389 Views · 3 Likes
Fraud Detection With Apache Kafka, KSQL, and Apache Flink
Exploring fraud detection case studies and architectures with Apache Kafka, KSQL, and Apache Flink with examples, guide images, and informative details.
January 26, 2023
by Kai Wähner CORE
· 2,466 Views · 1 Like
Upgrade Guide To Spring Data Elasticsearch 5.0
Learn about the latest Spring Data Elasticsearch 5.0.1 with Elasticsearch 8.5.3, starting with the proper configuration of the Elasticsearch Docker image.
January 26, 2023
by Arnošt Havelka CORE
· 2,152 Views · 1 Like
Data Mesh vs. Data Fabric: A Tale of Two New Data Paradigms
Data Mesh vs. Data Fabric: Are these two paradigms really in contrast with each other? What are their differences and similarities? Find out!
January 26, 2023
by Paolo Martinoli
· 2,136 Views · 1 Like
Do Not Forget About Testing!
This article dives into why software testing is essential for developers. By the end, readers will understand why testing is needed, types of tests, and more.
January 26, 2023
by Lukasz J
· 2,803 Views · 1 Like
The Role of Data Governance in Data Strategy: Part II
This article explains how data is cataloged and classified and how classified data is used to group and correlate the data to an individual.
January 25, 2023
by Satish Gaddipati
· 2,190 Views · 5 Likes
Revolutionizing Supply Chain Management With AI: Improving Demand Predictions and Optimizing Operations
How are AI and ML being used to revolutionize supply chain management? What are the latest advancements and best practices?
January 25, 2023
by Frederic Jacquet CORE
· 1,946 Views · 1 Like
Public Cloud-to-Cloud Repatriation Trend
This article discusses why organizations are moving away from the public cloud, what cloud repatriation is, its implications, and cloud repatriation statistics.
January 24, 2023
by Kiran Jewargi
· 2,481 Views · 1 Like
AIOps Being Powered by Robotic Data Automation
Data is the cornerstone of business conversions, and bots are accelerating the transformation.
January 24, 2023
by Tom Smith CORE
· 2,746 Views · 1 Like
What Is Blockchain Trilemma and How Could It Be Solved?
The blockchain trilemma is the most complicated problem to fix. This piece perfectly explains the blockchain trilemma and how to solve it.
January 24, 2023
by Mary Forest
· 766 Views · 1 Like
5 Factors When Selecting a Database
Here's how to tell when a database is right for your project.
January 24, 2023
by Peter Corless
· 2,679 Views · 5 Likes
Building Angular Library and Publishing in npmjs Registry
In this article, I will share my experience of publishing my first Angular library to npmjs.
January 23, 2023
by Siddhartha Bhattacharjee
· 2,019 Views · 2 Likes
How Analytics and Data Science Improve Your Business Efficiency
In this article, we discuss real-time reporting, existing data interpretation, and data analysis tools that can be leveraged for better analytics.
January 23, 2023
by Rahul Asthana
· 17,175 Views · 5 Likes
What Should You Know About Graph Database’s Scalability?
A how-to on graph database scalability, distributed database system design, and graph database query optimization.
January 20, 2023
by Ricky Sun
· 4,568 Views · 6 Likes
Explainer: Building High Performing Data Product Platform
Building a high-performing data product requires a strategy and clarity about the essential functionalities. Here's a quick overview.
January 19, 2023
by Yash Mehta
· 2,949 Views · 3 Likes
Simulate Network Latency and Packet Drop In Linux
In this article, we will understand how the tc command in Linux can be used to simulate network slowness and packet corruption.
January 19, 2023
by Chandra Shekhar Pandey
· 3,118 Views · 1 Like
Integration: Data, Security, Challenges, and Best Solutions
Explore the essentials of integration and lay a theoretical foundation for integrating systems across cloud and on-premises environments.
January 18, 2023
by Hariprasad Kapilavai
· 3,188 Views · 1 Like
How to Add Test Cases to Your React Typescript App
This blog will help you learn the React Testing Library and how to use it to test your React application.
January 18, 2023
by Kiran Beladiya
· 2,824 Views · 1 Like
Configuring OpenTelemetry Agents to Enrich Data and Reduce Observability Costs
Learn how to easily build and manage telemetry pipelines to ship data from IT environments of any kind and size to any analysis tool or storage destination.
January 18, 2023
by Paul Stefanski
· 1,963 Views · 1 Like
