In Introduction to MQTT Publish-subscribe Pattern, we learned that we need to initiate a subscription with the server to receive corresponding messages from it. The topic filter specified when subscribing determines which topics the server will forward to us, and the subscription options allow us to further customize the forwarding behavior of the server. In this article, we will focus on exploring the available subscription options in MQTT and their usage.

Subscription Options

A subscription in MQTT consists of a topic filter and corresponding subscription options, so we can set different subscription options for each subscription. MQTT 5.0 introduces four subscription options: QoS, No Local, Retain As Published, and Retain Handling. MQTT 3.1.1, on the other hand, only provides the QoS subscription option. However, the default behavior of these new subscription options in MQTT 5.0 remains consistent with MQTT 3.1.1, which makes upgrading from MQTT 3.1.1 to MQTT 5.0 straightforward. Now, let's explore the functions of these subscription options.

QoS

QoS is the most commonly used subscription option. It represents the maximum QoS level that the server may use when sending messages to the subscriber. A client may request a QoS level below 2 during subscription if its implementation does not support QoS 1 or 2. Additionally, if the server's maximum supported QoS level is lower than the QoS level requested by the client during subscription, the server cannot meet the client's requirement. In such cases, the server informs the subscriber of the granted maximum QoS level through the subscription response packet (SUBACK). The subscriber can then decide whether to accept the granted QoS level and continue communication. A simple formula:

Maximum QoS granted by the server = min(maximum QoS supported by the server, maximum QoS requested by the client)

However, the maximum QoS level requested during subscription does not restrict the QoS level the publishing end uses when sending messages. When the maximum QoS level requested during subscription is lower than the QoS level used for message publishing, the server will not ignore these messages. To maximize message delivery, it will downgrade the QoS level of these messages before forwarding them. Similarly, a simple formula:

QoS in the forwarded message = min(original QoS of the message, maximum QoS granted by the server)

No Local

The No Local option has only two possible values, 0 and 1. A value of 1 indicates that the server must not forward the message to the client that published it, while 0 means the opposite. This option is commonly used in bridging scenarios. Bridging is essentially two MQTT servers establishing an MQTT connection and subscribing to some topics from each other: each server forwards client messages to the other server, which can continue forwarding them to its own clients. Consider the simplest example: two MQTT servers, Server A and Server B, have subscribed to the topic # from each other. Now, Server A forwards some messages from its clients to Server B, and when Server B looks for matching subscriptions, Server A is among them. If Server B forwards the messages back to Server A, then Server A will forward them to Server B again after receiving them, falling into an endless forwarding storm.
However, if both Server A and Server B set the No Local option to 1 when subscribing to the topic #, this problem can be avoided.

Retain As Published

The Retain As Published option also has two possible values, 0 and 1. Setting it to 1 means the server should keep the Retain flag unchanged when forwarding application messages to subscribers, and setting it to 0 means the Retain flag must be cleared. Like the No Local option, Retain As Published primarily applies in bridging scenarios. When the server receives a retained message, in addition to storing it, it also forwards it to existing subscribers like a normal message, and the Retain flag of the message is cleared when forwarding. This presents a challenge in bridging scenarios: continuing with the previous setup, when Server A forwards a retained message to Server B, the Retain flag is cleared, so Server B does not recognize it as a retained message and does not store it. This makes retained messages unusable across bridges. In MQTT 5.0, we can have the bridged server set the Retain As Published option to 1 when subscribing to solve this problem.

Retain Handling

The Retain Handling subscription option indicates to the server whether to send retained messages when a subscription is established. By default, the retained messages matching the subscription are delivered when the subscription is established. However, there are cases where a client may not want to receive retained messages. For example, if a client reuses a session during connection but cannot confirm whether the previous connection successfully created the subscription, it may attempt to subscribe again. If the subscription already exists, the retained messages may have already been consumed, or the server may have cached messages that arrived during the client's offline period. In such cases, the client may not want to receive the retained messages the server publishes. Additionally, the client may never want to receive retained messages, even during the initial subscription. For example, we may send the state of a switch as a retained message, but for a particular subscriber, the switch event triggers some operations, so it is helpful not to send the retained message in this case. We can choose among three different behaviors using Retain Handling:

- Setting Retain Handling to 0 means that retained messages are sent whenever a subscription is established.
- Setting Retain Handling to 1 means retained messages are sent only when establishing a new subscription, not a repeated one.
- Setting Retain Handling to 2 means no retained messages are sent when a subscription is established.
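To make these options concrete, here is a minimal sketch of setting all four subscription options with the paho-mqtt Python client (version 1.x with MQTT 5.0 support; the broker address, client ID, and topic are illustrative choices, not details from this article):

Python
import paho.mqtt.client as mqtt
from paho.mqtt.subscribeoptions import SubscribeOptions

# Connect with the MQTT 5.0 protocol so the new subscription options are available
client = mqtt.Client(client_id="bridge-demo", protocol=mqtt.MQTTv5)
client.connect("broker.example.com", 1883)

# QoS: request at most QoS 1; the broker grants min(supported, requested) in the SUBACK.
# No Local: do not receive messages this client published itself (useful for bridges).
# Retain As Published: keep the Retain flag of forwarded messages unchanged.
# Retain Handling: do not send retained messages when the subscription is made.
options = SubscribeOptions(
    qos=1,
    noLocal=True,
    retainAsPublished=True,
    retainHandling=SubscribeOptions.RETAIN_DO_NOT_SEND,
)
client.subscribe("#", options=options)
client.loop_forever()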
If "green" was the most overused word of the 2010s, for the 2020s, surely it's "smart." Smartphones, smartwatches, smart homes, smart clothing, smart appliances, smart shampoo… We made up that last one, but it wouldn't be surprising to see it sometime. Collectively, they make up the Internet of Things (IoT) — devices connected via networks that aren't just a browser or other apps. We created these smart devices because they make life easier. We can sit on the couch and dim the lights with our phone to start a movie, of course, but we can also make sure our doors are still locked when we're away. These "things" have become essential to our lives, and when they aren't working, it's inconvenient at best and dangerous at worst. All these different things have one central feature in common: they are accessed and controlled through mobile devices, mostly our phones. And because they play such a central role in life, mobile operators are facing increased pressure to ensure there are no interruptions or lags in IoT communications.

Mobile Visibility

Mobile operators are accustomed to making connections like 5G available to devices, but simply having connectivity isn't enough. Only by testing can they understand how smart devices operate under different conditions like transport mediums, infrastructure variations, and backend services. Occasional spot checks aren't enough to ensure the user experience is where it needs to be. You need to know how far conditions can be pushed and where bottlenecks are likely to happen using real-world data. There are several key types of testing:

- Functionality: Each user, device, and operating system is different, so the first step is ensuring that the product works at the most basic level.
- Reliability: Beyond the user/device level, a test environment needs to be developed that allows you to see the product functioning as part of a larger system.
- Security: With potentially millions of users and large amounts of data created and accessed, data privacy and authentication practices need to be validated.

Because connected things produce an enormous amount of telemetry data, the sheer volume of information can hamper efforts to test network functionality. Your IoT testing strategy should include the ability to narrow down the data generated, filtering out useless information. You should also take these needs into account when developing your IoT plan:

- Universal applicability: Every network is unique, and with the growing complexity inherent in IT today, you need to be able to test traffic on every kind of network and device, including on-premises and cloud-based architecture. Your testing should also integrate with the testing solutions you already have in place.
- Actionable insights: Information is only useful when it's something that you can act on, and in particular, you should be able to present findings to non-technical management to inform business strategy. Your IoT testing strategy needs to deliver a simple, customizable dashboard with flexible report options.
- DevOps support: You should work to ensure the best business outcomes before launching a new app or service by integrating rigorous testing early in the development stage.
- Realistic testing: Each user represents a unique combination of factors, including geographic location, network configuration, and devices. Your testing should support the full degree of complexity that the real world will present, with highly customizable scripting options.
- Scalability: Sometimes pre-launch testing shows you have a viable solution, but when it's scaled to thousands of users or more, unexpected flaws are revealed at the worst time. You should have highly scalable testing options that can script real-world user journeys for a more authentic picture of what to expect upon launch.
- Automation: Testing that can be done without custom coding frees up your development teams so they can work on more innovative products. You should be able to set up your testing parameters and checks and allow the testing to run with minimal human interaction.

It is critical to utilize solutions and tools that help solve the complex digital performance challenges we face today. IoT testing can ensure a seamless user experience, anticipating bottlenecks and performance issues before they occur.
The journey to implementing artificial intelligence and machine learning solutions requires solving a lot of common challenges that routinely crop up in digital systems: updating legacy systems, eliminating batch processes, and using innovative technologies that are grounded in AI/ML to improve the customer experience in ways that seemed like science fiction just a few years ago. To illustrate this evolution, let's follow a hypothetical contractor who was hired to help implement AI/ML solutions at a big-box retailer. This is the first in a series of articles that will detail important aspects of the journey to AI/ML.

The Problem

It's the first day at BigBoxCo on the "Infrastructure" team. After working through the obligatory human resources activities, I received my contractor badge and made my way over to my new workspace. After meeting the team, I was told that we have a meeting with the "Recommendations" team this morning. My system access isn't quite working yet, so hopefully, IT will get that squared away while we're in the meeting. In the meeting room, it's just a few of us: my manager, two other engineers from my new team, and one engineer from the Recommendations team. We start off with some introductions and then move on to discuss an issue from the week prior. Evidently, there was some kind of overnight batch failure last week, and they're still feeling the effects of that. It seems like the current product recommendations are driven by data collected from customer orders. With each order, there's a new association between the products ordered, which is recorded. When customers view product pages, they can get recommendations based on how many other customers bought the current product alongside different products. The product recommendations are served to users on bigboxco.com via a microservice layer in the cloud. The microservice layer uses a local (cloud) data center deployment of Apache Cassandra to serve up the results. How the results are collected and served, though, is a different story altogether. Essentially, the results of associations between products (purchased together) are compiled during a MapReduce job. This is the batch process that failed last week. While this batch process has never been fast, it has become slower and more brittle over time. In fact, sometimes the process takes two or even three days to run.

Improving the Experience

After the meeting, I checked my computer, and it looked like I could finally log in. As I'm looking around, our principal engineer (PE) comes by and introduces himself. I told him about the meeting with the Recommendations team, and he gave me a little more of the history behind the Recommendation service. It sounds like that batch process has been in place for about ten years. The engineer who designed it has moved on; not many people in the organization really understand it, and nobody wants to touch it. The other problem, I begin to explain, is that the dataset driving each recommendation is almost always a couple of days old. While this might not be a big deal in the grand scheme of things, if the recommendation data could be made more up-to-date, it would benefit the short-term promotions that marketing runs. He nods in agreement and says he's definitely open to suggestions on improving the system.

Maybe a Graph Problem?

At the onset, this sounds to me like a graph problem. We have customers who log on to the site and buy products.
Before that, when they look at a product or add it to the cart, we can show recommendations in the form of "Customers who bought X also bought Y." The site has this today, in that the recommendations service does exactly this: it returns the top four additional products that are frequently purchased together. But we'd have to have some way to "rank" the products, because the mapping of one product to every other product purchased at the same time by any of our 200 million customers is going to get big, fast. So, we can rank them by the number of times they appear in an order.

A product recommendation graph showing the relationship between customers and their purchased products.

After modeling this out and running it on our graph database with real volumes of data, I quickly realized that this wasn't going to work. The traversal from one product to nearby customers, to their products, and then computing which products appear most takes somewhere in the neighborhood of 10 seconds. Essentially, we've "punted" on the two-day batch problem only to pay for it on each lookup, putting the traversal latency precisely where we don't want it: in front of the customer. But perhaps that graph model isn't too far off from what we need to do here. In fact, the approach described above is a machine learning (ML) technique known as "collaborative filtering." Essentially, collaborative filtering is an approach that examines the similarity of certain data objects based on activity with other users, and it enables us to make predictions based on that data. In our case, we will be implicitly collecting cart/order data from our customer base, and we will use it to make better product recommendations to increase online sales.

Implementation

First of all, let's look at data collection. Adding an extra service call to the shopping "place order" function isn't too big of a deal. In fact, it already exists; it's just that the data gets stored in a database and processed later. Make no mistake: we still want to include the batch processing. But we'll also want to process that cart data in real time so we can feed it right back into the online data set and use it immediately afterward. We'll start out by putting in an event streaming solution like Apache Pulsar. That way, all new cart activity is put on a Pulsar topic, where it is consumed and sent to both the underlying batch database and to help train our real-time ML model. As for the latter, our Pulsar consumer will write to a Cassandra table (shown in Figure 2) designed simply to hold entries for each product in the order. The product then has a row for all of the other products from that and other orders:

CQL
CREATE TABLE order_products_mapping (
    id               text,
    added_product_id text,
    cart_id          uuid,
    qty              int,
    PRIMARY KEY (id, added_product_id, cart_id)
) WITH CLUSTERING ORDER BY (added_product_id ASC, cart_id ASC);

Augmenting an existing batch-fed recommendation system with Apache Pulsar and Apache Cassandra.
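As a rough sketch of the consumer side of this pipeline (the topic name cart-activity, the keyspace name store, and the message shape are illustrative assumptions, not details from the article), a Pulsar consumer that writes each product pairing from an order into the table above might look like this:

Python
import json
import uuid
import pulsar
from cassandra.cluster import Cluster

# Connect to the local Pulsar and Cassandra deployments (addresses are placeholders)
pulsar_client = pulsar.Client("pulsar://localhost:6650")
consumer = pulsar_client.subscribe("cart-activity", subscription_name="recommendations-writer")

session = Cluster(["127.0.0.1"]).connect("store")
insert = session.prepare(
    "INSERT INTO order_products_mapping (id, added_product_id, cart_id, qty) "
    "VALUES (?, ?, ?, ?)"
)

while True:
    msg = consumer.receive()
    # Assumed message shape: {"cart_id": "...", "items": [{"product_id": "...", "qty": 1}, ...]}
    order = json.loads(msg.data())
    cart_id = uuid.UUID(order["cart_id"])
    items = order["items"]
    # Record every pairing of products purchased together in this order
    for item in items:
        for other in items:
            if item["product_id"] != other["product_id"]:
                session.execute(insert, (item["product_id"], other["product_id"], cart_id, other["qty"]))
    consumer.acknowledge(msg)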
We can then query this table for a particular product ("DSH915" in this example), like this:

CQL
SELECT added_product_id, SUM(qty)
FROM order_products_mapping
WHERE id='DSH915'
GROUP BY added_product_id;

 added_product_id | system.sum(qty)
------------------+-----------------
            APC30 |               7
           ECJ112 |               1
            LN355 |               2
            LS534 |               4
           RCE857 |               3
          RSH2112 |               5
           TSD925 |               1
(7 rows)

We can then take the top four results and put them into the product recommendations table, ready for the recommendation service to query by `product_id`:

CQL
SELECT * FROM product_recommendations WHERE product_id='DSH915';

 product_id | tier | recommended_id | score
------------+------+----------------+-------
     DSH915 |    1 |          APC30 |     7
     DSH915 |    2 |        RSH2112 |     5
     DSH915 |    3 |          LS534 |     4
     DSH915 |    4 |         RCE857 |     3
(4 rows)

In this way, the new recommendation data is constantly kept up to date. Also, all of the infrastructure assets described above are located in the local data center. Therefore, the process of pulling product relationships from an order, sending them through a Pulsar topic, and processing them into recommendations stored in Cassandra happens in less than a second. With this simple data model, Cassandra is capable of serving the requested recommendations in single-digit milliseconds.

Conclusions and Next Steps

We'll want to be sure to examine how our data is being written to our Cassandra tables in the long term, so we can get ahead of potential problems related to things like unbound row growth and in-place updates. Some additional heuristic filters may be necessary as well, like a "do not recommend" list. This is because there are some products that our customers will buy either once or infrequently, and recommending them will only take space away from other products that they are much more likely to buy on impulse. For example, recommending a purchase of something from our appliance division, such as a washing machine, is not likely to yield an "impulse buy." Another future improvement would be to implement a real-time AI/ML platform like Kaskada to handle both the product relationship streaming and serving the recommendation data to the service directly. Fortunately, we did come up with a way to augment the existing, sluggish batch process, using Pulsar to feed cart-add events to be processed in real time. Once we get a feel for how this system performs in the long run, we should consider shutting down the legacy batch process. The PE acknowledged that we made good progress with the new solution and, better yet, that we have also begun to lay the groundwork to eliminate some technical debt. In the end, everyone feels good about that. In an upcoming article, we'll take a look at improving product promotions with vector searching.
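To make that last step concrete, here is a minimal sketch that rolls the summed quantities up into the top four recommendations for a product. It builds on the assumed store keyspace from the earlier sketch, and the product_recommendations columns are inferred from the query results shown above:

Python
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("store")

def refresh_recommendations(product_id, top_n=4):
    # Sum quantities per co-purchased product for this product_id
    rows = session.execute(
        "SELECT added_product_id, SUM(qty) AS score FROM order_products_mapping "
        "WHERE id=%s GROUP BY added_product_id",
        (product_id,),
    )
    ranked = sorted(rows, key=lambda r: r.score, reverse=True)[:top_n]
    # Write the ranked results into the table the recommendation service reads
    for tier, row in enumerate(ranked, start=1):
        session.execute(
            "INSERT INTO product_recommendations (product_id, tier, recommended_id, score) "
            "VALUES (%s, %s, %s, %s)",
            (product_id, tier, row.added_product_id, row.score),
        )

refresh_recommendations("DSH915")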
As IoT technology advances rapidly, it becomes easier to interact with devices and among devices. However, the new challenge in the IoT field is making the interaction more natural, efficient, and smart. Advanced Large Language Models (LLMs) such as ChatGPT, GPT-3.5, and GPT-4, created by OpenAI, have gained much popularity around the world lately. This has created many opportunities for combining Artificial General Intelligence (AGI) with the IoT domain, offering promising avenues for future progress. ChatGPT is an advanced natural language processing application that can easily achieve natural conversations with humans thanks to its excellent natural language processing skills. Message Queuing Telemetry Transport (MQTT) is the main protocol in IoT, enabling real-time and efficient data transmission through lightweight, low-bandwidth communication and a publish/subscribe model. By combining the MQTT protocol with ChatGPT, we can envision a future where intelligent human-machine interaction in the IoT field becomes more seamless and accessible. In the smart home field, ChatGPT enables users to control their smart home devices using natural dialogue, enhancing their overall living experience. In the field of industrial automation, ChatGPT aids engineers in efficiently analyzing equipment data, leading to increased productivity and effectiveness. This blog will show you how to combine the MQTT protocol with a natural language processing application like ChatGPT and give you a simple example of using them together for intelligent applications in the IoT field.

Basic Concepts

Before we start, let's have a quick overview of some fundamental concepts of MQTT and ChatGPT.

MQTT

As mentioned earlier, the MQTT protocol is a lightweight messaging protocol that uses a publish/subscribe model. It is widely applied in various fields such as IoT, mobile Internet, smart hardware, telematics, smart cities, telemedicine, power, oil, and energy. The MQTT broker is the key component for connecting many IoT devices using the MQTT protocol. We will use EMQX, a highly scalable MQTT broker, in our solution to ensure efficient and reliable connection of massive numbers of IoT devices and real-time handling and delivery of message and event stream data. We can use an MQTT client to connect to the MQTT broker and communicate with IoT devices. In this blog, we use MQTTX, a cross-platform open-source MQTT client that provides desktop, command line, and web-based applications. It can test the connection with MQTT brokers and help developers quickly develop and debug MQTT services and applications.

ChatGPT

GPT (Generative Pre-trained Transformer) is a deep learning model that excels at text generation and understanding. ChatGPT can comprehend and produce natural language and have natural, smooth dialogues with users. To achieve ChatGPT's natural language processing skills, we need to use the API that OpenAI offers to communicate with the GPT model.

The ChatGPT interface.

Solution Design and Preparation

Utilizing the functionalities of the MQTT protocol and ChatGPT, we aim to devise a solution enabling seamless integration and interoperability between the two. We will use the API that OpenAI offers to communicate with the GPT model and write a client script to achieve ChatGPT-like natural language processing functionality. The MQTT client in this script will receive the message and send it to the API, generating the natural language response.
The response will be published to a specific MQTT topic to enable the interaction cycle between ChatGPT and the MQTT client. We will show the interaction process between ChatGPT and the MQTT protocol for message receiving, handling, and delivery through this solution. Please follow the steps below to get ready with the necessary tools and resources.

Install EMQX: You can use Docker to install and launch EMQX 5.0 quickly:

Shell
docker run -d --name emqx -p 1883:1883 -p 8083:8083 -p 8883:8883 -p 8084:8084 -p 18083:18083 emqx/emqx:latest

You can also install EMQX using RPM or DEB packages besides Docker. Please see the EMQX 5.0 Installation Guide for more details.

Install the MQTTX desktop application: Go to the MQTTX website and choose the version that matches your OS and CPU architecture. Then, download and install it.

Sign up for an OpenAI account and get an API key: Go to OpenAI and sign in or create an account. After that, click on the top right corner and choose View API Keys. Then, under the API keys section, click Create new secret key to make a new API key. Please store this key securely, as you will need it for API authentication in later programs.

Once you finish these steps, you will have the tools and resources to integrate the MQTT protocol with ChatGPT. For more information and learning materials on how to work with the GPT language model using OpenAI's API, you can check out the OpenAI documentation.

Coding

After setting up the resources and environment, we will build an MQTT client using the Node.js environment. This client will get messages from an MQTT topic, send the data to the OpenAI API, and create a natural language response with the GPT model. The generated response is then published to the specific MQTT topic for integrated interaction. You can also use other programming languages like Python, Golang, etc., based on your needs and familiarity. We will use the API directly to provide a user-friendly illustration, but you can also use the official library, which offers a simpler way to use Node.js and Python. For more information, please refer to OpenAI Libraries.

Set up the Node.js environment. Make sure Node.js is installed (v14.0 or higher is recommended). Make a new project folder and initialize the project with the npm init command. Then, use this command to install the required dependency packages:

Shell
npm init -y
npm install axios mqtt dotenv

We use axios to send HTTP requests, mqtt to connect to MQTT servers, and dotenv to load environment variables.

Use environment variables. Create a file named .env and put your OpenAI API key in it:

Shell
OPENAI_API_KEY=your_openai_api_key

Code the program. Create a new index.js file where you connect to the MQTT broker, subscribe to the specific MQTT topic, and listen for messages. When a message is received, use axios to send an HTTP request to the OpenAI API, create a natural language response, and publish the response to the specific MQTT topic.
The following is a list of key code snippets for each step, for your reference. Use the MQTT library to connect to the MQTT broker and subscribe to the chatgpt/request/+ topic by default to get incoming MQTT messages:

JavaScript
const host = "127.0.0.1";
const port = "1883";
const clientId = `mqtt_${Math.random().toString(16).slice(3)}`;
const OPTIONS = {
  clientId,
  clean: true,
  connectTimeout: 4000,
  username: "emqx",
  password: "public",
  reconnectPeriod: 1000,
};
const connectUrl = `mqtt://${host}:${port}`;
const chatGPTReqTopic = "chatgpt/request/+";
// Prefix used later to extract the user ID from the incoming topic
const chatGPTReqTopicPrefix = "chatgpt/request/";
const client = mqtt.connect(connectUrl, OPTIONS);

Create a genText function that runs asynchronously and takes the userId parameter. Use axios to make an HTTP client instance and authenticate with the OpenAI API key in the HTTP headers. Then, make a POST request to the OpenAI API to generate natural language replies. Use the MQTT client to publish the generated replies to the specific topic to which the user is subscribed. Store the historical messages in the messages array.

JavaScript
// Add your OpenAI API key to your environment variables in .env
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
let messages = []; // Store conversation history
const maxMessageCount = 10;

const http = axios.create({
  baseURL: "https://api.openai.com/v1/chat",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${OPENAI_API_KEY}`,
  },
});

const genText = async (userId) => {
  try {
    const { data } = await http.post("/completions", {
      model: "gpt-3.5-turbo",
      messages: messages[userId],
      temperature: 0.7,
    });
    if (data.choices && data.choices.length > 0) {
      const { content } = data.choices[0].message;
      messages[userId].push({ role: "assistant", content: content });
      if (messages[userId].length > maxMessageCount) {
        messages[userId].shift(); // Remove the oldest message
      }
      const replyTopic = `chatgpt/response/${userId}`;
      client.publish(replyTopic, content, { qos: 0, retain: false }, (error) => {
        if (error) {
          console.error(error);
        }
      });
    }
  } catch (e) {
    console.log(e);
  }
};

Finally, save messages received on the chatgpt/request/+ topic in the messages array and call the genText function to generate and send natural language replies directly to the specific topic to which the user is subscribed. The messages array can hold up to 10 historical messages per user.

JavaScript
client.on("message", (topic, payload) => {
  // Check if the topic is not the one you're publishing to
  if (topic.startsWith(chatGPTReqTopicPrefix)) {
    const userId = topic.replace(chatGPTReqTopicPrefix, "");
    messages[userId] = messages[userId] || [];
    messages[userId].push({ role: "user", content: payload.toString() });
    if (messages[userId].length > maxMessageCount) {
      messages[userId].shift(); // Remove the oldest message
    }
    genText(userId);
  }
});

Run the script:

Shell
node index.js

We have now finished the fundamental functional aspect of the demo project. Apart from providing the core functionality, the code incorporates a feature that gives users access isolation by appending distinct suffixes to specific topics. By preserving the history of previous messages, the GPT model can grasp the context of the conversation and generate responses that are more coherent and contextual, using information from past interactions. The full code is available on GitHub at openai-mqtt-nodejs.

Alternative Solution

Apart from the above example, another approach to speed up development is to use EMQX's rule engine and the Webhook action from its data bridging function.
EMQX enables the configuration of rules that initiate a Webhook callback when a message is sent to a specific topic. We need to code a simple web service that uses the OpenAI API to work with the GPT model and return the replies created by the GPT model via HTTP (a minimal sketch of such a service appears at the end of this article). To accomplish the goal of integrated interaction, we have two options: either create a new MQTT client to publish the GPT model's replies to a specific topic, or directly employ the EMQX Publish API. Both approaches allow us to achieve the desired outcome of seamless interaction. This approach can save development costs and quickly build a PoC or demo for users with existing web services. It does not require an independent MQTT client and uses the EMQX rule engine to simplify the integration process and flexibly handle data. However, it still requires developing and maintaining web services, and a Webhook may not be easy and convenient for complex application scenarios. Each of the solutions mentioned above has its benefits, and we can pick the more appropriate solution based on actual business requirements and developer skill level. In any case, EMQX, as the MQTT infrastructure, provides important support for system integration, enabling developers to create project prototypes and advance digital transformation quickly.

Demo

We can use the MQTTX desktop client to test this demo project after developing the interaction between the MQTT client and the GPT model. The user interface of MQTTX is similar to chat software, making it easier and more suitable for showing interaction with chatbots. First, we need to create a new connection in MQTTX that connects to the same MQTT server as the one used in the previous code examples, that is, 127.0.0.1. Then, subscribe to the chatgpt/response/demo topic to receive replies and send messages to the chatgpt/request/demo topic. The demo suffix here can be changed to other strings to isolate access between users. We can test this by sending a Hello message.

Next, we can create some more complex demo environments. When the temperature of a sensor goes beyond a preset threshold, the ChatGPT robot can send an alert message to another MQTT topic, which is connected to a monitoring device, such as a smart watch or smart speaker. The monitoring device can use natural language technology to turn the alert information into speech so that users can receive and understand it more easily. We can also build a smart home environment that includes multiple MQTT topics that match different types of devices (such as lights, air conditioners, speakers, etc.) and use ChatGPT to generate natural language commands for interacting with these devices in real time through MQTT clients.

Future Prospects

By combining ChatGPT and the MQTT protocol, you can create an intelligent IoT system with vast potential for smart homes and industrial automation. For example, you can use natural language to control your home devices, such as switches, brightness, color, and other parameters, and enjoy a more comfortable living environment. You can also use ChatGPT and MQTT to manage your industrial devices smartly and improve your manufacturing process. In the future, we can imagine ChatGPT or smarter AGI tools playing a greater role in enhancing efficiency and productivity in the IoT field, such as:

- Message parsing: Analyze the MQTT messages, extract the relevant data, and prepare for further processing and analysis.
- Semantic understanding: Understand and process the meaning of the MQTT messages and extract more accurate information.
- Intelligent processing: Use AI technology to process the MQTT messages intelligently and help users find suitable solutions quickly.
- User feedback: Receive user feedback through MQTT and respond appropriately as an intelligent interaction agent.
- Virtual assistant: Control smart home devices through language recognition technology as a virtual assistant, providing users with smarter and more efficient services and improving the convenience and comfort of life.

Conclusion

This blog delves deep into the integration of MQTT and ChatGPT, revealing the exciting possibilities they offer in various applications. By utilizing EMQX, MQTTX, and the OpenAI API, we explore implementing an AI application similar to ChatGPT. Through seamless data reception and forwarding via MQTT, we successfully demonstrate the integration of MQTT and ChatGPT. As AI technology becomes more integrated into products (such as New Bing using GPT models for search engines and GitHub's Copilot), we think that the future trends of AI and IoT technologies will also involve enhancing natural language interactions, making device control smarter, and creating more novel use cases. These technologies are not yet part of the production environment but are on the horizon. In summary, integrating MQTT and ChatGPT is a promising and exciting field that deserves more attention and research. We hope that these constantly developing innovative technologies will make the world a better place.
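As a rough illustration of the web service described in the Alternative Solution section above, the sketch below uses Flask to receive the Webhook call, forwards the text to the OpenAI chat completions endpoint, and publishes the reply back over MQTT with paho-mqtt. The webhook payload fields (topic, payload) and the /chatgpt route are assumptions made for illustration; consult the EMQX rule engine documentation for the exact request format it sends.

Python
import os
import requests
import paho.mqtt.publish as publish
from flask import Flask, request, jsonify

app = Flask(__name__)
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]

@app.route("/chatgpt", methods=["POST"])
def chatgpt_webhook():
    # Assumed webhook body: the EMQX rule forwards the MQTT topic and payload as JSON fields
    event = request.get_json()
    user_id = event["topic"].replace("chatgpt/request/", "")
    prompt = event["payload"]

    # Ask the GPT model for a reply
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    reply = resp.json()["choices"][0]["message"]["content"]

    # Publish the reply to the per-user response topic on the local EMQX broker
    publish.single(f"chatgpt/response/{user_id}", reply, hostname="127.0.0.1")
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=5000)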
As a software developer who lives with his aging parents, I was worried about their well-being and wanted to ensure that they were safe and secure at all times. So, I created an IoT application that would help me monitor their activities in real time and alert me in case of any emergency. I started by installing sensors in different parts of the house to monitor temperature, humidity, and sound levels. I then used Solace technology to build an IoT application that would use the data from these sensors to alert me in case of any abnormal activity. I connected the sensors to the Solace messaging platform, which would send messages to my smartphone whenever there was a change in the sensor data. For instance, when the temperature in the house dropped below a certain level, I received an alert on my phone, prompting me to check on my parents' heating system. Overall, the real-time event-driven architecture (EDA) of my IoT application gave me a sense of security and peace of mind. I was able to keep an eye on my parents' activities at all times and respond quickly to any changes or emergencies. Solace technology proved to be a reliable and efficient messaging platform that helped me build this robust and secure IoT application. In this article, I'll provide an overview of how I used Solace technology to build this application.

Overview of Event-Driven Architecture

EDA is an approach to software architecture that uses events to trigger and communicate between services, applications, and systems. Events can be anything that represents a change in state, such as a sensor reading or a user action. By processing events in real time, organizations can respond quickly to changes and opportunities, enabling faster and more accurate decision-making.

Why Choose Solace Technology?

When I set out to create my application, I knew that I needed a messaging platform that could handle high volumes of events in real time while providing enterprise-grade security and reliability. After researching various options, I ultimately chose Solace technology for several reasons:

- Its ability to handle high volumes of events and distribute them to multiple consumers in real time. This was critical for my application, which involved collecting sensor data from multiple devices and processing it in real time.
- Enterprise-grade security and reliability. These were important for me because I needed to ensure that the data collected from the sensors was secure and that the messaging platform could handle the demands of a real-world application.

Overall, I'm very pleased with my decision to use Solace technology, as it has provided me with the performance, security, and reliability that I needed to build a successful solution. To showcase how I developed this real-time IoT application, I will walk you through a simple application that I created to monitor temperature and humidity. For this, I used a Raspberry Pi and a DHT11 sensor. The application sends events to a Solace message broker that I set up, which then distributes these events to multiple consumers in real time.

Setting up the Raspberry Pi and DHT11 Sensor

Step 1: Connect the DHT11 Sensor to the Raspberry Pi GPIO Pins

When I first started working on my project, I knew I needed a temperature and humidity sensor. After some research, I decided to use the DHT11 sensor because of its digital capabilities and 1-wire protocol. To connect it to my Raspberry Pi, I followed these steps:

1. Shut down my Raspberry Pi to avoid any electrical issues during the connection process.
2. Located the GPIO pins on my Raspberry Pi board. They were easy to find, as they were located on the top of the board.
3. Connected the VCC pin of the DHT11 sensor to the 5V pin on my Raspberry Pi, ensuring that the voltage requirements matched.
4. Connected the GND pin of the DHT11 sensor to the GND pin on my Raspberry Pi, ensuring the connection was secure.
5. Connected the DATA pin of the DHT11 sensor to GPIO pin 4, which was the one I had selected to use for the project, but you can use any other GPIO pin you prefer based on your project requirements.

Here is the wiring diagram I used for reference:

DHT11 Sensor    Raspberry Pi
VCC          -> 5V
GND          -> GND
DATA         -> GPIO 4

Step 2: Install the Adafruit Python DHT Library

I found that the Adafruit Python DHT library is a great tool for reading data from the DHT11 sensor using Python. Installing this library was quite easy for me. I just followed these simple steps:

1. Opened the terminal on my Raspberry Pi.
2. Updated my package lists by running the following command: sudo apt-get update
3. Installed the Adafruit Python DHT library by running the following command: sudo pip3 install Adafruit_DHT

Note: You may need to install pip3 first if it's not already installed on your system. You can do this by running the following command: sudo apt-get install python3-pip

Step 3: Write a Python Script That Reads Temperature and Humidity Values From the Sensor

Once I connected the DHT11 sensor to my Raspberry Pi and installed the Adafruit Python DHT library, I wrote a Python script that reads temperature and humidity values from the DHT11 sensor connected to GPIO pin 4:

Python
import Adafruit_DHT

# Set the sensor type and GPIO pin number
sensor = Adafruit_DHT.DHT11
pin = 4

# Try to read the temperature and humidity from the sensor
humidity, temperature = Adafruit_DHT.read_retry(sensor, pin)

# Check if the temperature and humidity values were successfully read
if humidity is not None and temperature is not None:
    print('Temperature: {:.1f}°C'.format(temperature))
    print('Humidity: {:.1f}%'.format(humidity))
else:
    print('Failed to read sensor data.')

Explanation of the code:

- The first line imports the Adafruit_DHT library.
- The next two lines set the sensor type (DHT11) and the GPIO pin number (4) that the sensor is connected to.
- The read_retry() call tries to read the temperature and humidity from the sensor. This function will attempt to read the sensor data multiple times if it fails the first time.
- The final if/else block checks if the temperature and humidity values were successfully read. If they were, the values are printed to the console. If not, an error message is printed.

Configuring the Solace Message Broker

Step 1: Sign Up for a Solace PubSub+ Cloud Account

PubSub+ Cloud is a cloud-based messaging service that provides a fully managed message broker. To use it, you'll need to sign up for an account.

Step 2: Configure the Raspberry Pi Python Script To Connect to the Solace Message Broker Using the Client Username and Password

To connect to the Solace message broker from my Raspberry Pi Python script, I used the Solace Python API. Here's how I configured my Python script to connect to the Solace message broker:

1. Installed the Solace Python API on my Raspberry Pi by running the following command in the terminal: sudo pip3 install solace-semp-config
2. Created a new service by clicking on the "Create Service" button and following the instructions.
3. Obtained the connection details (host, username, password, and port) from the Solace Cloud console. To do this, I clicked on the "Connect" tab and then clicked on my messaging service.
4. Updated my Python script to connect to the Solace message broker using the connection details obtained in the previous step.

Here's an example Python script that connects to the Solace message broker and publishes a message:

Python
import time
import Adafruit_DHT
import solace_semp_config

# Sensor setup (same as the earlier script)
sensor = Adafruit_DHT.DHT11
pin = 4

# Set up the connection details
solace_host = "<your-solace-host>"
solace_username = "<your-solace-username>"
solace_password = "<your-solace-password>"
solace_port = "<your-solace-port>"
solace_vpn = "<your-solace-vpn>"
solace_topic = "<your-solace-topic>"

# Create a new Solace API client
client = solace_semp_config.SempClient(solace_host, solace_username, solace_password, solace_vpn, solace_port)

# Connect to the Solace message broker
client.connect()

# Publish a message to the Solace message broker
while True:
    humidity, temperature = Adafruit_DHT.read_retry(sensor, pin)
    if humidity is not None and temperature is not None:
        message = '{{"temperature":{:.1f},"humidity":{:.1f}}}'.format(temperature, humidity)
        client.publish(solace_topic, message)
        print("Published message: " + message)
    else:
        print("Failed to read sensor data")

    # Wait for some time before publishing the next message
    time.sleep(5)

# Disconnect from the Solace message broker
client.disconnect()

Explanation of the code:

- The first lines import the Solace Python API, along with time and the Adafruit_DHT sensor library used earlier.
- The next few lines set up the connection details, including the Solace host, username, password, port, VPN, and topic.
- The solace_semp_config.SempClient() function creates a new Solace API client using the connection details.
- The client.connect() function connects to the Solace message broker.
- The client.publish() function publishes a message to the Solace message broker. The message is a JSON object that contains the temperature and humidity values.
- The time.sleep() function adds a delay of 5 seconds before publishing the next message. This is done to avoid overwhelming the message broker with too many messages at once.
- client.disconnect(): This line disconnects the Solace API client from the message broker. It's a good practice to disconnect from the broker when you're done using it, to ensure that you don't leave any connections open unnecessarily.

Consuming the Events

I created a new Python script on my laptop, which has access to the Solace message broker. To build the necessary functionalities, I imported two modules, namely paho.mqtt.client for the MQTT client and json for parsing the incoming JSON message. I defined the callback function that gets triggered every time a message is received. My callback function was designed to parse the incoming message and display it on the console. Next, I created an MQTT client instance, and I set the client ID and username/password as appropriate for my Solace Cloud account. To connect my MQTT client to the Solace message broker, I used the appropriate connection details. I subscribed to the topic that the Raspberry Pi script is publishing to. Finally, I started the MQTT client loop, which listens for incoming messages and calls the callback function whenever a message is received.
Here is an example code snippet that demonstrates these steps:

Python
import paho.mqtt.client as mqtt
import json

# Define the callback function
def on_message(client, userdata, message):
    payload = json.loads(message.payload)
    print("Temperature: {}°C, Humidity: {}%".format(payload["temperature"], payload["humidity"]))

# Set up the MQTT client
client = mqtt.Client(client_id="my-client-id")
client.username_pw_set(username="my-username", password="my-password")

# Register the callback so it runs for every incoming message
client.on_message = on_message

# Connect to the Solace message broker
client.connect("mr-broker.messaging.solace.cloud", port=1883)

# Subscribe to the topic
client.subscribe("my/topic")

# Start the MQTT client loop
client.loop_forever()

In this example, the callback function on_message is defined to parse the incoming JSON message and print the temperature and humidity values to the console. The MQTT client is then set up with the appropriate client ID, username, and password, registered with the on_message callback, and connected to the Solace message broker. The client subscribes to the topic that the Raspberry Pi script is publishing to, and the client loop is started to listen for incoming messages. When a message is received, the on_message callback function is called.

Best Practices and Considerations for Using Solace Technology in Real-World Scenarios

When using Solace technology in real-world scenarios, there are several best practices and considerations that should be taken into account. These are important to ensure that your Solace-based real-time event-driven architecture is reliable, scalable, and secure. Let's dive into each of these best practices and their benefits.

Designing for High Availability and Scalability

Designing for high availability and scalability is crucial in real-world scenarios, as it ensures that your application can handle a large number of events and users without experiencing downtime or performance issues. This involves setting up the Solace messaging infrastructure in a clustered, highly available, and fault-tolerant configuration. By doing so, you can ensure that your messaging infrastructure is always available and can handle any load that comes its way.

- Increased reliability: Your messaging infrastructure will be highly available and fault-tolerant, ensuring that your application can always communicate with Solace.
- Scalability: The infrastructure can easily handle an increased number of events and users, so you don't have to worry about any performance issues.

Ensuring Data Security and Privacy

Data security and privacy are critical in any application, and even more so in a real-time event-driven architecture. It's important to ensure that sensitive data is protected from unauthorized access and that data is transmitted securely. This can be achieved by implementing encryption, access control, and other security measures.

- Improved security: Protects sensitive data from unauthorized access and ensures that the communication between Solace and other components of the application is secure.
- Compliance: Ensuring data security and privacy helps meet compliance requirements and protects the reputation of your organization.

Monitoring and Managing Performance

Monitoring and managing performance are essential to ensure that your application is performing optimally and meeting the desired service level agreements (SLAs). This involves setting up monitoring and alerting mechanisms to proactively identify and address performance issues.
- Increased uptime: Early detection of performance issues can prevent downtime and ensure that your application is always available.
- Improved performance: Proactive monitoring and management can help identify bottlenecks and improve the overall performance of your application.

Integrating With Other Systems and Technologies

Integrating Solace with other systems and technologies is essential to ensure that your application can communicate with other components of your application ecosystem. This involves integrating Solace with other messaging systems, databases, and other components.

- Improved interoperability: Integrating Solace with other systems and technologies can improve the interoperability of your application ecosystem.
- Increased functionality: Integration with other systems and technologies can help add new functionalities and features to your application.

By following these best practices and considerations, organizations can ensure that their Solace-based real-time event-driven architecture is reliable, scalable, and secure. Designing for high availability and scalability, ensuring data security and privacy, monitoring and managing performance, and integrating with other systems and technologies are crucial for the success of your application. Implementing these best practices will help you achieve your business goals and provide a better user experience.

Conclusion

Finally, building a real-time IoT application using Solace technology can be an exciting and rewarding experience. Whether you are using a Raspberry Pi, an app, or both, the possibilities are endless for what you can achieve with Solace. I hope that you found this guide helpful and informative, and I encourage you to continue exploring the world of IoT and Solace technology to unlock even more potential in your projects. With the right tools and approach, you can create innovative and impactful solutions that make a real difference in the world. Happy coding!
What Is the ESP32?

The ESP32 is an incredible microcontroller developed by Espressif Systems. Building on the legacy of its predecessor, the ESP8266, the ESP32 boasts dual-core processing capabilities, integrated Wi-Fi, and Bluetooth functionality. Its rich features and cost-effectiveness make it a go-to choice for creating Internet of Things (IoT) projects, home automation devices, wearables, and more.

What Is Xedge32?

Xedge32, built upon the Barracuda App Server C Code Library, offers a comprehensive range of IoT protocols known as the "north bridge." Xedge32 extends the Barracuda App Server's Lua APIs and interfaces seamlessly with the ESP32's GPIOs, termed the "south bridge." While not everyone is an embedded C or C++ expert, with Xedge32, programming embedded systems becomes accessible to all. The beauty of Xedge32 lies in its simplicity: you don't need any C code experience. All you need is Lua, which is refreshingly straightforward to pick up. It's a friendly environment that invites everyone, whether you're a seasoned developer or just someone curious about microcontroller-based IoT projects. With Lua's friendly nature, diving into microcontroller programming becomes super easy. The Xedge IDE is built on the foundation of the Visual Studio Code editor.

Installing the Xedge32 Lua IDE

To get started with Xedge32, an ESP32 development board is your starting point. However, Xedge32 has specific requirements regarding the type of ESP32 you can employ. If you're new to the world of ESP32, it's recommended to opt for the newer ESP32-S3. Ensure it comes with 8MB RAM, although most ESP32-S3 models feature this. If your project scope involves camera integrations, consider the ESP32-S3 CAM board. The CAM board will allow you to explore exciting functionalities like streaming images to a browser via WebSockets or using the MQTT CAM example, which publishes images to an MQTT broker. While Amazon offers the ESP32-S3, marketplaces like AliExpress often present more budget-friendly options. Once you have your ESP32, proceed to the Xedge32 installation page for step-by-step firmware installation instructions. The following video shows how to install Xedge32 using Windows.

Your First Xedge32 Program: Blinking an LED

Many ESP32 development boards come equipped with a built-in LED. This LED can be great for simple tests, allowing you to quickly verify that your code is running correctly. However, if you wish to dive deeper into understanding the wiring and interfacing capabilities of the ESP32, we recommend getting your hands on a breadboard. The figure below provides a visual guide. A breadboard is a handy tool that lets you prototype circuits without soldering, making it perfect for beginners and experiments. Using a breadboard, you can connect multiple components, test out different configurations, and quickly see the results of your efforts. An external LED can be a great starting component for your initial projects. LEDs are simple to use, and they offer immediate visual feedback. While many DIY electronics kits come with an assortment of LEDs, you can also purchase them separately. Places like Amazon offer various LED packs suitable for breadboard projects. Remember, when working with an LED, ensure you have the appropriate resistor to prevent too much current from flowing through the LED, which could damage it. As you become more comfortable with the ESP32 and the breadboard, you can expand your component collection and experiment with more complex circuits.
The Lua Blink LED Script

The following Lua script shows how to create a blinking LED pattern using Lua. Here's a step-by-step breakdown:

- Local blink function: The script defines a local function named blink. This function handles the LED's blinking behavior.
- Utilizing coroutines for timing: Within the blink function, an infinite loop (while true do) uses Lua coroutines for its timing mechanism. Specifically, coroutine.yield(true) is employed to make the function "sleep" for a specified duration. In this context, it pauses the loop between LED state changes for one second.
- LED state control: The loop inside the blink function manages the LED's state. It first turns the LED on with pin:value(true), sleeps for a second, turns it off with pin:value(false), and then sleeps for another second. This on-off cycle continues indefinitely, creating the blink effect.
- GPIO port 21 initialization: Before the blinking starts, GPIO port 21 is set up as an output using esp32.gpio(21,"OUT") and is referenced by the pin variable. You must modify this number if your LED is connected to a different GPIO port. If you're unfamiliar with how GPIO works, check out the GPIO tutorial to understand this concept better.

Finally, the last two code lines outside the function initialize the blinking pattern, setting the timer to trigger the blink function every 1,000 milliseconds (every second).

Lua
local function blink()
   local pin = esp32.gpio(21,"OUT")
   while true do
      trace"blink"
      pin:value(true)
      coroutine.yield(true) -- Sleep for one timer tick
      pin:value(false)
      coroutine.yield(true) -- Sleep
   end
end
timer=ba.timer(blink) -- Create timer
timer:set(1000) -- Timer tick = one second

How To Create an Xedge Blink LED App

When the Xedge32-powered ESP32 is running, use a browser, navigate to the ESP32's IP address, and click the Xedge IDE link to start the Xedge IDE. Create a new Xedge app as follows:

1. Right-click Disk and click New Folder on the context menu.
2. Enter blink as the new folder name and press Enter.
3. Expand Disk, right-click the blink directory, and click New App in the context menu.
4. Click the Running button and click Save.
5. Expand the blink app now visible in the left pane tree view. The blink app should be green, indicating the app is running.
6. Right-click the blink app and click New File on the context menu.
7. Type blinkled.xlua and press Enter.
8. Click the new blinkled.xlua file to open the file in Xedge.
9. Select all and delete the template code.
10. Copy the above blink LED Lua code and paste the content into the Xedge editor.
11. Click the Save & Run button to save and start the blink LED example.

See the Xedge IDE video for more information on creating Xedge applications. If everything is correct, the LED should start blinking.

References

- Online Interactive Lua Tutorials
- Xedge32 introduction
- How to upload the Xedge32 firmware to your ESP32 CAM board
- Lua timer API
- Lua GPIO API

What's Next?

If you're eager to explore further, there are numerous Xedge32 examples available on GitHub to kickstart your learning. However, it's worth noting that Xedge32 is still a budding tool undergoing active development. As a result, while examples are available on GitHub, comprehensive tutorials accompanying them might be sparse at the moment.
What Is IoT in the Cloud? The Internet of Things, or IoT, refers to the network of physical devices, vehicles, appliances, and other items embedded with sensors, software, and network connectivity, which enables these objects to connect and exchange data. IoT in the cloud means storing and processing the massive amounts of data generated by these interconnected devices in the cloud, rather than on local servers or in traditional data centers. Here are some of the key functions of cloud-based IoT platforms: Data storage and management: IoT devices generate a staggering amount of data. This data needs to be stored, managed, and processed efficiently. Cloud storage provides a scalable, cost-effective solution for storing and managing IoT data. Scalability and flexibility: As the number of IoT devices increases, so does the need for storage and processing power. Cloud computing provides a scalable, flexible solution that can easily adapt to these changing needs. Real-time processing and analytics: IoT devices often need to process and analyze data in real-time to provide valuable insights and make informed decisions. Cloud computing provides the necessary infrastructure and processing power to carry out these real-time operations. Cloud-based IoT platforms are transforming industries all over the world. From smart homes that can be controlled remotely through a smartphone, to intelligent manufacturing systems that can monitor and improve production efficiency, IoT in the cloud is changing the way we live and work. 8 Benefits of Cloud-Based IoT Platforms Here are some of the key benefits of integrating IoT devices with cloud-based platforms: Greater scalability: Cloud-based IoT platforms offer immense scalability. The cloud's elastic nature allows organizations to add or remove IoT devices without worrying about the infrastructure's capability to handle the increased or decreased load. As the number of connected devices in an IoT network can fluctuate, having a platform that can scale according to the need is a huge advantage. Speed and efficiency: Cloud platforms have robust processing capabilities that allow data to be processed in near real-time. IoT devices constantly generate large amounts of data that need to be processed quickly to make timely decisions. With cloud computing, this high-speed data processing becomes possible, increasing efficiency and the speed at which actions can be taken. Reduced operational costs: One of the significant advantages of cloud-based IoT platforms is the reduction in operational costs. With the cloud, businesses don't need to invest heavily in setting up physical infrastructure or worry about its maintenance. The pay-as-you-go model allows organizations to pay for only what they use, leading to considerable cost savings. Simplified device management: Cloud platforms often come with robust IoT device management features that make it easy to monitor, manage, and maintain a large number of devices. This includes functionality for remote device monitoring, firmware and software updates, troubleshooting, and more, all of which can be done from a central location. Advanced data analytics: IoT devices generate massive amounts of data. Using the power of the cloud, this data can be processed and analyzed more effectively, revealing insights that can lead to better business decisions. Furthermore, the integration of machine learning and artificial intelligence technologies can help in predictive analysis, anomaly detection, and other advanced analytics tasks. 
Data redundancy and recovery: Cloud-based platforms usually have excellent data redundancy and recovery protocols in place. Data is often backed up in multiple locations, which ensures that in the event of any failure or loss of data, a backup is readily available. Global accessibility: One of the key features of cloud services is the ability to access the system from anywhere in the world, as long as you have an internet connection. This allows for remote monitoring and control of IoT devices, enabling real-time responses regardless of geographical location. Improved interoperability: Cloud-based IoT platforms tend to support a wide range of protocols and standards, making it easier to integrate different types of IoT devices and applications. This improved interoperability can lead to more effective IoT solutions. Key Challenges in Cloud-Based IoT Implementation While IoT in the Cloud has significant benefits, organizations can face several challenges during implementation. Here is an overview of these challenges and how to overcome them. Data Security and Privacy Issues The vast number of connected devices in an IoT network presents multiple entry points for potential cyber-attacks. In many cases, the sensitivity of the data collected makes such attacks very damaging. Additionally, the global nature of the cloud means that data could be stored in different geographical locations, each with its own set of privacy laws and regulations, making compliance a complex task. To address these issues, businesses must implement robust security measures at every level of the IoT network, from the devices to the cloud. These might include encryption, secure device authentication, firewalls, and intrusion detection systems. Additionally, organizations should ensure that their cloud service provider complies with all relevant privacy laws and regulations. Network Connectivity and Latency Another significant challenge in implementing IoT in the Cloud is ensuring reliable network connectivity and low latency. The performance of an IoT system heavily relies on the ability of devices to transmit data to the cloud quickly and reliably. However, issues such as weak signal strength, network congestion, or failures in the network infrastructure can lead to connectivity problems and high latency, impacting the performance of the IoT system. To overcome this challenge, businesses must invest in reliable and high-performance network infrastructure. This could include dedicated IoT networks, high-speed internet connections, and edge computing solutions that process data closer to the source, reducing latency. MQTT is a scalable, reliable protocol that is becoming a standard for connecting IoT devices to the cloud. Integration With Existing Systems Finally, integrating IoT devices and cloud platforms with existing systems and technologies can be a formidable task. This is due to the diversity of IoT devices and the lack of standardized protocols for communication and data exchange. As a result, businesses may face difficulties in ensuring that their IoT devices can effectively communicate with the cloud and other systems in their IT infrastructure. To address this, businesses should consider using middleware or IoT platforms that provide standardized protocols and APIs for communication and data exchange. Additionally, they could seek assistance from expert IoT consultants or systems integrators.
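To make the security recommendations above more concrete, here is a minimal sketch of an IoT device connecting to a cloud MQTT broker with TLS encryption and password-based authentication. It assumes the paho-mqtt Python client (1.x-style constructor); the broker hostname, credentials, and topic are placeholders rather than values from any particular platform.

import paho.mqtt.client as mqtt

# Hypothetical device credentials, broker address, and topic; replace with your own.
client = mqtt.Client(client_id="device-001")            # paho-mqtt 1.x style constructor
client.username_pw_set("device-001", "device-secret")   # device authentication
client.tls_set()                                         # TLS encryption using the system CA bundle

client.connect("broker.example.com", 8883)               # 8883 is the conventional MQTT-over-TLS port
client.loop_start()                                      # run the network loop in a background thread
info = client.publish("factory/line-1/temperature", '{"celsius": 21.4}', qos=1)
info.wait_for_publish()                                  # wait until the QoS 1 handshake completes
client.loop_stop()
client.disconnect()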
Monitoring Time-Series IoT Device Data Time-series data is crucial for IoT device monitoring and data visualization in industries such as agriculture, renewable energy, and meteorology. It enables trend analysis, anomaly detection, and predictive analytics, empowering businesses to optimize performance and make data-driven decisions. Thanks to technological advancements and the accessibility of open-source tools, gathering and analyzing data from IoT devices has become easier than ever before. In this tutorial, we will guide you through the process of setting up a monitoring system for IoT device data. We will use historical electricity consumption data from some European countries, captured at a 15-minute resolution. The data is sent to an MQTT-compatible message broker called Mosquitto and then channeled to QuestDB through Telegraf, a highly efficient data collector. To visualize the results, we will connect Grafana to QuestDB using its Postgres interface. Since we don't have our own sensors to collect the data and send it directly to the broker, we will simulate the scenario with a script that gathers the electricity consumption data from Open Power System Data and sends it to Mosquitto. Prerequisites Before delving into the system setup, please make sure that you have installed the following: Docker and the latest Go version. Setting Up the Prerequisites To get started, clone the prepared GitHub repository:

git clone https://github.com/questdb/questdb-mock-power-sensor mock-sensor

This repository contains all the configuration files and mock sensor scripts we will need during the tutorial. Furthermore, we need a new Docker network named "tutorial" to enable communication between containers without installing additional software on the host system. To create the new network, execute the following command:

docker network create tutorial

Setting Up an MQTT Broker To enable communication between IoT devices and the monitoring system, we need a message broker such as Eclipse Mosquitto, which implements the lightweight MQTT protocol. By using Eclipse Mosquitto, we establish a reliable and efficient communication channel for IoT devices. To run Eclipse Mosquitto, we are going to use Docker. Mosquitto's default configuration does not allow unauthenticated clients, yet it ships with no default credentials, so no client would be able to connect. To fix this, let's use conf/mosquitto/mosquitto.conf to override the default settings:

allow_anonymous true
listener 1883 0.0.0.0

To start the Eclipse Mosquitto broker using Docker, execute the following command from the repository root:

docker run --rm -dit -p 1883:1883 -p 9001:9001 \
  -v "$(pwd)"/conf/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf \
  --network=tutorial --name=mosquitto eclipse-mosquitto

By running this command, the Eclipse Mosquitto broker will start within a Docker container, configured with the settings from the mosquitto.conf file. The specified ports will be exposed to enable MQTT communication, and the container will be connected to the tutorial Docker network, allowing it to communicate with other containers in the network. Tunneling Data Into Mosquitto Now that the message broker is running, open a new terminal window, navigate to the repository root, and start the script that will populate the data for us:

cd script
go get
go run ./main.go

Great! The script will continue running in the background until it is manually stopped or until it runs out of data to publish.
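For readers curious about what the publishing side looks like, here is a minimal Python sketch of the same idea (the repository's actual script is written in Go, and the sample values below are invented). It publishes InfluxDB line protocol records to the sensor topic on the local Mosquitto broker, which is the format and topic Telegraf will consume later in this tutorial.

import time
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="mock-sensor")   # paho-mqtt 1.x style constructor
client.connect("localhost", 1883)
client.loop_start()

# Invented sample records; the real script replays Open Power System Data.
records = [
    {"country": "NL", "load_actual": 11500.0, "load_forecast": 11800.0},
    {"country": "NL", "load_actual": 11620.0, "load_forecast": 11750.0},
]

for rec in records:
    # InfluxDB line protocol: measurement,tags fields timestamp(ns)
    line = (
        f"sensor,country={rec['country']} "
        f"load_actual={rec['load_actual']},load_forecast={rec['load_forecast']} "
        f"{time.time_ns()}"
    )
    client.publish("sensor", line, qos=1)
    time.sleep(1)  # one record per second, mirroring the tutorial's script

client.loop_stop()
client.disconnect()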
Since we are using a historical dataset, the script overrides the original timestamps with the current time when sending the records to the MQTT broker. With a 1-second delay between records, the script consistently publishes data to the broker, ensuring a steady flow of simulated sensor data for the tutorial. This allows you to progress with the rest of the tutorial without worrying about starting or stopping the script manually. Setting Up QuestDB To ensure that all incoming IoT data flowing through Mosquitto is stored in QuestDB by the end of this tutorial, let's start QuestDB in the background. This will allow us to connect to it later using Telegraf. Execute the following Docker command to start QuestDB:

docker run --rm -dit -p 8812:8812 -p 9000:9000 \
  -p 9009:9009 -e QDB_PG_READONLY_USER_ENABLED=true \
  --network=tutorial --name=questdb questdb/questdb

Once the command is executed, QuestDB will start running in the background. To validate that QuestDB is up and running, visit http://localhost:9000 in your web browser. You should see the QuestDB web interface, indicating that QuestDB has been successfully started. Connecting the Dots and Adding Telegraf In this setup, Telegraf plays a key role in transferring data between Mosquitto and QuestDB. To enable this seamless data transfer, we will utilize QuestDB's ILP (Influx Line Protocol) interface, as ILP can efficiently handle the large volumes of data sent from Telegraf. To configure Telegraf, we are using the conf/telegraf/telegraf.conf file, which has the following content:

# Configuration for Telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "1s"

[[inputs.mqtt_consumer]]
  servers = ["tcp://mosquitto:1883"]
  topics = ["sensor"]
  data_format = "influx"
  client_id = "telegraf"
  data_type = "string"

# Write results to QuestDB
[[outputs.socket_writer]]
  # Write metrics to a local QuestDB instance over TCP
  address = "tcp://questdb:9009"

Let's start the Telegraf container by executing:

docker run --rm -it \
  -v "$(pwd)"/conf/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf \
  --network=tutorial --name=telegraf telegraf

Once Telegraf is up and running, the sensor data is automatically tunneled into QuestDB, and a new table called "sensor" is created to store the incoming data. To verify that the data is successfully flowing into QuestDB, you can execute the following query on the QuestDB console, accessible at http://localhost:9000:

SELECT * FROM sensor;

In the next steps of the tutorial, we will set up Grafana, a powerful data visualization and monitoring tool. We will connect Grafana to QuestDB and create compelling visualizations based on the received sensor data. Visualizing the Dataset With Grafana To create a Grafana container and connect it to QuestDB, execute the following command:

docker run --rm -dit -p 3000:3000 \
  --network=tutorial --name=grafana grafana/grafana

This command starts a Grafana container within the same tutorial network. To access the Grafana login page, navigate to http://localhost:3000 in your web browser. You will be prompted to log in using the default credentials: Username: admin, Password: admin. To connect Grafana with QuestDB and establish a data source, follow these steps: Open your web browser and go to http://localhost:3000/datasources/new. On the "New data source" page, select "PostgreSQL" as the data source type from the list of available options.
Fill in the form with the following details:

Name: QuestDB
Host: questdb:8812
Database: qdb
User: user
Password: quest
TLS/SSL mode: disable

Once you have filled in the form with the correct information, click the "Save & Test" button at the bottom of the page. Grafana will attempt to connect to QuestDB using the provided credentials and verify the connection. If everything is set up correctly, you should see a success message indicating that the data source was added successfully. Now that Grafana is connected to QuestDB, you can begin creating insightful dashboards to visualize the electricity consumption data. Follow these steps to create a new dashboard: In your browser, navigate to the new dashboard page. Once you're on the new dashboard page, click the "Add visualization" button. On the popup panel, select the "QuestDB" data source. Now you're ready to start building your dashboard and visualizing the electricity consumption data from QuestDB. Grafana provides a wide range of visualization options, including graphs, charts, tables, and more. You can customize the panel settings, apply different visualization types, and configure data queries to create meaningful visual representations of the data. To visualize the electricity consumption data from QuestDB, we will use the "Time series" chart type, which is the chart selected by default for our data. The "Time series" chart is ideal for displaying data over time and is well-suited for analyzing trends and patterns. In the query builder panel, switch to "Code" and paste the following SQL query:

SELECT date_trunc('minute', timestamp) AS minute,
       AVG(load_actual) AS avg_load_actual,
       AVG(load_forecast) AS avg_load_forecast
FROM sensor
WHERE country = 'NL'
GROUP BY minute
ORDER BY minute ASC

By executing this query, we obtain a result set of minutely aggregated data, showing the average actual and forecasted energy load for each minute. Feel free to adjust the sampling or grouping interval if you wish. If you followed this tutorial successfully, you should see similar results. To make the changes persistent while the container is running, set the title of the graph in the right-hand properties pane and save it by clicking the "Apply" button in the top right corner. The time series chart we have created, with the average actual and forecasted energy consumption, provides valuable insights into the accuracy of the forecasting logic. By analyzing this chart, we can easily identify discrepancies between the actual and forecasted values, helping pinpoint areas where improvements can be made. Cleaning Up Resources Upon finishing the tutorial, you may want to clean up any dangling resources, such as the tutorial network we created. Removing a network first requires removing the resources attached to it. To remove the containers, the network, and the images used during this tutorial, run the following commands:

docker ps -qa --filter="network=tutorial" | xargs -n1 docker kill
docker network rm tutorial
docker rmi telegraf eclipse-mosquitto grafana/grafana questdb/questdb

Summary In this tutorial, we explored the setup and configuration of a monitoring system for IoT device data using MQTT, Telegraf, QuestDB, and Grafana. Through a series of steps, we established communication between IoT devices and the monitoring system using Eclipse Mosquitto as the MQTT broker.
We simulated data collection using a script that gathered electricity consumption data from Open Power System Data and sent it to the MQTT broker. We stored this data in QuestDB and visualized it using Grafana. By connecting Grafana to QuestDB, we created informative dashboards to gain insights. The tutorial showcased the power of time-series data and its visualization, enabling us to compare actual and forecasted energy consumption.
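As a small optional extra that is not part of the original tutorial, the same Postgres interface Grafana uses can also be queried programmatically. Here is a minimal sketch using psycopg2 with the connection details from this setup (port 8812, database qdb, user "user", password "quest"):

import psycopg2

# Connect to QuestDB's Postgres wire protocol endpoint exposed by the container.
conn = psycopg2.connect(
    host="localhost", port=8812, dbname="qdb", user="user", password="quest"
)
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT date_trunc('minute', timestamp) AS minute,
               AVG(load_actual) AS avg_load_actual,
               AVG(load_forecast) AS avg_load_forecast
        FROM sensor
        WHERE country = 'NL'
        GROUP BY minute
        ORDER BY minute ASC
        LIMIT 10
        """
    )
    for minute, actual, forecast in cur.fetchall():
        print(minute, actual, forecast)
conn.close()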
Welcome to the first post in our new series exploring the world of MQTT broker clustering. If you're involved in the IoT (Internet of Things) space or have embarked on any journey involving real-time data transfer, you've probably come across MQTT (Message Queuing Telemetry Transport). MQTT is a lightweight, publish-subscribe network protocol that transports messages between devices, often known as the backbone of IoT. Today, we are going to introduce a key aspect of MQTT, one that's crucial for large-scale IoT deployments: MQTT broker clustering. This series is not merely a discourse on EMQX; instead, it's an attempt to comprehensively explore current MQTT technologies. We aim to provide insights, stimulate discussion, and, hopefully, ignite a spark of innovation in your journey with MQTT and IoT. So, stay tuned as we navigate the fascinating landscape of MQTT broker clustering. What Are an MQTT Broker and a Cluster? At the heart of MQTT's publish-subscribe protocol lies the MQTT broker, a central, critical component that handles the transmission of messages between the sender (publisher) and the receiver (subscriber). You may think of the broker as a post office; it accepts messages from various senders, sorts them, and ensures they reach the correct recipients. In the context of MQTT, publishers send messages (for example, sensor data or commands) to the broker, which then sorts these messages based on topics. Subscribers that have expressed interest in certain topics receive these sorted messages from the broker. This mechanism is what allows MQTT to efficiently handle real-time data communication, making it a go-to protocol for IoT applications. (A minimal publish/subscribe sketch appears at the end of this post.) MQTT broker clustering, simply put, is a group of MQTT brokers working together to ensure continuity and high availability. If one broker goes down, others in the cluster are there to pick up the slack, ensuring there's no disruption in service. Clustering is crucial for businesses and services that cannot afford downtime. Why MQTT Broker Clustering? Imagine you have thousands, if not millions, of IoT devices connected to a single MQTT broker, and it crashes or becomes unavailable. All those devices lose their connection, disrupting data flow and potentially leading to significant losses. By implementing a broker cluster, you spread the load, reduce the risk of such a catastrophe, and ensure scalability for future growth. At a very high level, below are the benefits of MQTT broker clustering. Scalability: One of the key advantages of MQTT broker clustering is its ability to easily scale up to accommodate growth. As the number of connected devices or the volume of data increases in your IoT network, you can add more brokers to the cluster to handle the additional load. This allows your system to expand smoothly and efficiently without overburdening a single broker or compromising system performance. High Availability: High availability is crucial for many IoT applications where constant data flow is essential. In a clustered setup, if one broker goes down, the others in the cluster continue to operate, ensuring uninterrupted service. This redundancy mitigates the risk of a single point of failure, providing a more robust and reliable network for your IoT devices. Load Balancing: With the help of DNS resolution or load balancers, MQTT broker clusters can be deployed to distribute the load among all brokers in the cluster. This prevents any single broker from becoming a performance bottleneck.
By sharing the load, each broker can operate more efficiently, leading to improved overall performance and responsiveness. This is particularly beneficial in scenarios with a high volume of messages or a large number of connected devices. Centralized Management: Clustering allows for centralized management of brokers, simplifying administration tasks. Instead of dealing with each broker individually, changes can be made across the cluster from a single point, saving time and reducing the likelihood of errors. This centralized approach also provides a comprehensive view of the system's performance, aiding in monitoring, debugging, and optimizing the network's performance. Maintenance Flexibility: With a single broker, taking down the system for maintenance can cause service disruption. However, with a cluster, you can perform maintenance or upgrades on individual nodes without disrupting the overall service. What Will Be Explored in This Series? As we embark on this series, our aim is to probe the depths of MQTT broker clustering together, traversing from the foundational concepts to the intricacies that characterize advanced implementations. We invite you, our readers, to join in this exploration, fostering a collaborative environment for engaging discussions, shared learning, and mutual growth in understanding these technologies. Here's a brief overview of what you can expect: Defining Clustering: We'll kick off by digging deeper into what clustering truly means. While the basic definition of clustering may sound straightforward, it becomes more nuanced as we get into the details. For instance, does mirroring all messages between two MQTT brokers constitute a cluster? We'll strive to provide a clearer definition of a cluster, discussing the challenges and complexities that come with it. Implementing MQTT Broker Clusters: There are countless ways to implement a cluster, each with its own pros and cons. In this part of the series, we'll explore some popular approaches to implementing MQTT broker clusters, analyzing their strengths and weaknesses. Scalability in MQTT Broker Clusters: This discussion will be an extension of the second part, focusing specifically on scalability. As the size of the cluster grows, new challenges arise, and different clustering strategies may have varied implications. We'll discuss the challenges and potential solutions. Fault Tolerance: Failures are inevitable in any system, and a robust MQTT broker cluster should be prepared to handle them gracefully. In this section, we'll discuss common types of failures in a cluster and how cluster members can recover from such disruptions. Operability and Management: Centralized management of MQTT broker clusters can be a significant advantage, but it comes with its own set of challenges. Whether a cluster comprises homogenous or heterogeneous nodes can greatly impact operational requirements. We'll explore these challenges and discuss potential solutions, considering different contexts like self-hosted IoT platforms or middleware vendors. Wrapping Up Whether you're looking to understand the basics or seeking to navigate the complexities of MQTT broker clustering, this series promises to be an enlightening journey. Stay tuned as we dive into these fascinating topics, one post at a time.
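As promised earlier, here is the publish/subscribe flow in code form: a minimal sketch, not tied to any particular broker product, using the paho-mqtt Python client (1.x-style constructor) against a single, non-clustered broker assumed to be running on localhost. The topic and client IDs are placeholders.

import time
import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

def on_message(client, userdata, msg):
    # The broker delivers every message published to a topic the subscriber asked for.
    print(f"received on {msg.topic}: {msg.payload.decode()}")

subscriber = mqtt.Client(client_id="subscriber-1")    # paho-mqtt 1.x style constructor
subscriber.on_message = on_message
subscriber.connect("localhost", 1883)
subscriber.subscribe("sensors/+/temperature", qos=1)  # '+' matches any single topic level
subscriber.loop_start()

# Publish one message; the broker routes it to every matching subscription.
publish.single("sensors/device42/temperature", "21.7", qos=1, hostname="localhost")

time.sleep(1)            # give the broker a moment to deliver the message
subscriber.loop_stop()
subscriber.disconnect()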
The Internet of Things (IoT) is a massive system of interconnected devices communicating and exchanging data through IoT protocols and standards. It depends on data gathering and analysis to perform and improve its capabilities. To make this possible, the IoT relies on sensors and actuators to measure various parameters related to a device's specific functions. Businesses can benefit heavily from automation, as it can handle many operations that would otherwise require manual labor. They can also enjoy reduced costs and improved overall efficiency. However, learning how to calibrate a sensor is critical to achieving these benefits. Using Sensors in the IoT IoT protocols and standards are only as good as the sensors that obtain and transmit the information that makes them work. A sensor's number one performance index is accuracy. It should gather accurate data to provide actionable information for analysis and process implementation. Whether changes are incremental or radical, a sensor must be able to detect them with remarkable accuracy to be effective. But, as in other applications, this might not always be the case. Here are some factors that affect a sensor's accuracy: Production errors: Nature's randomness plays a role in the performance of sensors in more ways than one. Even with strict production standards and quality control, it is impossible to produce sensors of identical quality, even when they come from the same batch or were manufactured using the same process. Age of sensors: Human eyes can be considered highly sophisticated sensors that change as they age. The same can be said about the sensors the Internet of Things uses. Their accuracy and performance degrade as they get older. Mechanical damage: Wear and tear can affect a sensor's performance the more it is used. Exposure to the elements and other environmental changes can also contribute to its degradation. Improper reference: Sensors must have a correct reference to produce accurate readings. A slight shift in the connections may result in incorrect readings. Some factors that affect references are ambient conditions, temperature, and time. System noise: Noise is unwanted or irrelevant random variation or meaningless information that affects a sensor's readings. It can be caused by internal or external factors such as electrical, thermal, and environmental noise, or even noise originating within the sensor itself. How to Calibrate a Sensor There are multiple steps in sensor calibration, and it's best to leave the task to trained professionals. The IoT comprises interconnected systems of sensors requiring specific calibration techniques and methods. It's a complex process that may involve the following steps: Get measurements: Engineers test the current values using certified instruments. They may use different testing methods based on national or international calibration standards. Refer to ideal values: Every piece of equipment must have a benchmark for optimum performance and a margin for deviation. Engineers use these ideal values to proceed with the following step. Determine the deviation: Once the engineers see the current values, they can compare them against the ideal readings and recommend adjustments to the equipment. Adjust the calibration: The engineers then adjust the sensor based on their findings, compensating for losses or reducing the gains, and return the equipment to optimal performance.
Report the results: All the data from the calibration is then recorded for future reference. Engineers can base subsequent maintenance and calibration on the figures from previous checkups. Sensor calibration is the process of finding out the gains and biases within a system. It is also used to increase a system's performance and functionality. (A small numeric sketch at the end of this article illustrates the idea.) Benefits of Sensor Calibration Considering all these factors, it's vital to calibrate sensors to get reliable information. Sensors are imperfect even when fresh from the factory, and they will fail sooner or later. The question is "when," not "if." Calibration can remedy a sensor's issues and keep the system in tip-top shape. Here are five benefits of sensor calibration. 1. Easy Data Gathering Systems with calibrated sensors are easier to read, leading to efficient data gathering. The IoT relies on data provided by sensors to improve processes by applying relevant solutions. When a system shows unreliable data, something might be wrong with the system itself. 2. Accurate Results Faulty sensors can be misleading. They are unreliable at best and can cause damage if left unchecked. Sensor calibration yields accurate results and ensures safety, functionality, and system reliability. Imagine a smart home that shows inaccurate readings for water pressure or climate control. Professionals who know how to calibrate a sensor solve these problems and create better living conditions for homeowners. 3. Improved Equipment Reliability System maintenance is a prevalent concern for the IoT as it works 24/7, all year round. Sensors should be regularly checked and maintained to ensure they're working correctly. Periodic calibration can pinpoint sensors that need replacement or fine-tuning. Reliable equipment can serve as the benchmark for future testing and maintenance. Sensor calibration can help improve the system and prevent unnecessary loss and premature equipment degradation. 4. Better Data Analysis Sensors obtain and transmit data to the cloud for storage and analysis. Proper calibration provides reliable data that is vital for research and other scientific and practical applications. Data scientists can draw reliable information from the data in the cloud, such as user preferences, behavior, patterns, and other statistics, to better gauge a particular system's efficiency. 5. Shorter Testing Time Checking an expansive network of sensors takes time and skilled labor. Sensor calibration ensures all components within a system are working as they should. This leads to shorter testing times and more room for other relevant activities like quality control and product development. The Future and the Internet of Things Diving deep into IoT protocols and standards opens up many possibilities for those interested in improving their growth and business potential. Whether for boosting manufacturing output or improving farming best practices, IoT sensors can help in many ways. Companies should look more into how to calibrate a sensor to understand their data better and figure out how to use it to improve their potential. Doing so will take time and effort, but the benefits will be worth it.
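As promised above, here is a tiny numeric sketch of the "gains and biases" idea: compare a sensor's raw output against two certified reference values, derive a linear correction, and apply it to new readings. The numbers are invented for illustration.

def two_point_calibration(raw_low, raw_high, ref_low, ref_high):
    """Return (gain, bias) such that corrected = gain * raw + bias."""
    gain = (ref_high - ref_low) / (raw_high - raw_low)
    bias = ref_low - gain * raw_low
    return gain, bias

# Raw sensor output measured at two certified reference points (e.g., 0 and 100 degrees Celsius).
gain, bias = two_point_calibration(raw_low=2.1, raw_high=97.4, ref_low=0.0, ref_high=100.0)

raw_reading = 51.3
corrected = gain * raw_reading + bias
print(f"gain={gain:.4f}, bias={bias:.4f}, corrected={corrected:.2f}")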
Frank Delporte
Java Developer - Technical Writer,
CodeWriter.be
Tim Spann
Principal Developer Advocate,
Cloudera
Carsten Rhod Gregersen
Founder, CEO,
Nabto
Emily Newton
Editor-in-Chief,
Revolutionized