
IoT

IoT, or the Internet of Things, is a technological field that makes it possible for users to connect devices and systems and exchange data over the internet. Through DZone's IoT resources, you'll learn about smart devices, sensors, networks, edge computing, and many other technologies — including those that are now part of the average person's daily life.

Latest Refcards and Trend Reports
Trend Report: Edge Computing and IoT
Refcard #214: MQTT Essentials
Refcard #263: Messaging and Data Infrastructure for IoT

DZone's Featured IoT Resources

Refcard #367: Data Management for Industrial IoT
Refcard #386: Mobile Database Essentials

UUID: Coordination-Free Unique Keys
By Jaromir Hamala
How To Test IoT Security
By Anna Smith
What Is an IoT Gateway? Is It Important?
By Paridhi Dhamani
Are Industrial IoT Attacks Posing a Severe Threat to Businesses?

What Is the Industrial Internet of Things (IIoT)?

IIoT refers to using interconnected devices, sensors, and machines in industrial settings. These devices can monitor and analyze data from various systems, giving businesses real-time insights into their operations. For example, a factory might have IIoT sensors installed throughout its assembly lines. Each sensor collects information about what's happening in that area of the factory, such as temperature levels or product quality. This information is then collected by a server (or "hub") that aggregates the data from each sensor and displays it on an interactive map for easy viewing. This allows factory managers to better understand what's happening at each stage of production and to respond quickly and effectively when something goes wrong. IIoT has the potential to revolutionize various industries, including manufacturing, transportation, and energy, by making operations more efficient, reducing downtime, and improving product quality.

What Are IIoT Attacks?

IIoT attacks are malicious activities aimed at disrupting, damaging, or taking control of IIoT systems. These attacks can be carried out by hackers, cybercriminals, or even disgruntled employees. The main goal of these attacks is to damage the systems, steal sensitive data, or compromise the business's operations. Some common types of IIoT attacks include:

Ransomware: This type of attack uses malware to encrypt the data on IIoT devices, making it inaccessible to the business until a ransom is paid.
Distributed Denial of Service (DDoS): DDoS attacks overwhelm IIoT systems with a flood of traffic, rendering them unusable. This attack makes an online service, network resource, or machine unavailable to its intended users.
Man-in-the-Middle (MITM) Attack: This type of attack intercepts the communication between IIoT devices and alters it to gain access to sensitive data or take control of the systems.
Malware: Malware can infect IIoT devices, enabling attackers to steal data, take control of the systems, or cause damage.
Physical Attacks: Attackers can physically access IIoT devices and systems to steal, modify, or destroy them.

Why Are IIoT Attacks a Severe Threat to Businesses?

IIoT attacks pose a severe threat to businesses that rely on these systems, and their consequences can be severe and long-lasting. IIoT attacks can impact enterprises in several ways, including:

Financial Loss: An IIoT attack can lead to significant financial losses, including lost revenue, damage to equipment, and the cost of remediation.
Reputation Damage: If a business suffers an IIoT attack, its reputation may be severely damaged, costing it customers and trust.
Regulatory Compliance: Many industries have regulatory compliance requirements that businesses must meet. An IIoT attack can result in a breach of these regulations, leading to penalties and fines.
Safety Concerns: In some cases, IIoT attacks can have severe safety implications, such as disrupting critical infrastructure or systems essential for public safety.
Intellectual Property Theft: Businesses that rely on IIoT systems may have valuable intellectual property stored on those systems. An IIoT attack can result in the theft of this intellectual property, compromising the competitiveness of the business.

How Can Businesses Protect Themselves from IIoT Attacks?

Businesses can take several steps to protect themselves from IIoT attacks.
Some best practices include:

Develop a Cybersecurity Plan: A cybersecurity plan should be developed that takes into account the unique risks associated with IIoT. This plan should identify potential threats and risks, assess vulnerabilities, and outline appropriate responses.
Conduct Regular Risk Assessments: Regular risk assessments are necessary to identify vulnerabilities in the IIoT environment. The assessments should include identifying weaknesses in hardware and software, identifying potential attack vectors, and evaluating the effectiveness of existing security measures.
Implement Appropriate Access Controls: Access to IIoT systems should be limited to authorized personnel. This can be achieved through robust authentication mechanisms, such as multi-factor authentication, and by restricting access to sensitive data and systems on a need-to-know basis.
Use Secure Communication Protocols: IIoT devices should use secure communication protocols, such as SSL/TLS, to ensure that data is transmitted securely. Devices should also be configured to accept communications only from authorized sources (see the sketch at the end of this article).
Implement Security Measures at the Edge: Edge computing can help secure IIoT systems by allowing security measures to be implemented closer to the data source. This can include using firewalls, intrusion detection systems, and antivirus software.
Ensure Software and Firmware Are Up to Date: Keeping software and firmware up to date is essential to ensure that known vulnerabilities are addressed. This includes not just the IIoT devices themselves but also any supporting software and infrastructure.
Implement Appropriate Physical Security Measures: Physical security measures, such as access control and monitoring, should be implemented to protect IIoT devices from physical tampering.
Develop an Incident Response Plan: An incident response plan should be developed to ensure appropriate action is taken during an IIoT attack. This plan should outline steps to minimize damage, contain the attack, and restore normal operations.
Provide Employee Training: Employees should be trained on the risks associated with IIoT and how to recognize and respond to potential threats. This includes educating employees on best practices for secure passwords, safe browsing habits, and identifying suspicious activity.

To Conclude

The rapid adoption of industrial IoT has increased efficiency but has also broadened the threat surface in the IoT landscape. Protecting against IIoT attacks requires a multi-faceted approach that includes strong access controls, secure communication protocols, regular risk assessments, and a comprehensive incident response plan. By taking these steps, businesses can minimize the risks associated with IIoT and protect themselves from potentially devastating consequences.
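To make the secure-communication best practice above concrete, here is a minimal sketch of an IIoT sensor publishing telemetry over MQTT with mutual TLS, using the Node.js mqtt package. The broker URL, topic, and certificate paths are hypothetical placeholders, not values from this article.

JavaScript

// Minimal sketch: an IIoT device publishing telemetry over MQTT with mutual TLS.
// Broker URL, topic, and certificate paths are hypothetical placeholders.
const fs = require('fs');
const mqtt = require('mqtt');

const client = mqtt.connect('mqtts://broker.factory.example:8883', {
  key: fs.readFileSync('./device-key.pem'),   // device private key
  cert: fs.readFileSync('./device-cert.pem'), // device certificate
  ca: fs.readFileSync('./plant-ca.pem'),      // plant CA: trust only known brokers
  rejectUnauthorized: true,                   // refuse unverified servers (blocks MITM)
});

client.on('connect', () => {
  // Publish a sensor reading on an assembly-line topic.
  client.publish('plant/line-7/temperature', JSON.stringify({ celsius: 72.4 }));
});

client.on('error', (err) => console.error('MQTT error:', err.message));

With mutual TLS, the broker verifies the device certificate and the device verifies the broker against the plant CA, so each side accepts communications only from authorized sources.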

By Deepak Gupta
Five Arguments for Why Microsoft Azure Is the Best Option for Running Industrial IoT Solutions

The current technological landscape demands digital transformation, and the industrial internet of things (IIoT) is undoubtedly one of the best alternatives for that. Industrial IoT solutions leverage connected sensors, actuators, and other smart devices to monitor, track, and analyze available data and make the best use of it for enhanced efficiency and minimized costs. However, this leaves us with an important question: which cloud computing platform is the best option for running industrial IoT solutions? According to research by Statista, over 70% of organizations use Microsoft Azure for their cloud services. Moreover, Gartner recently reported that Microsoft Azure is one of the key leading players among the 16 top global companies considered for industrial IoT platforms. With these stats and facts, you probably already have the answer. However, you might be wondering what makes Microsoft Azure a popular choice over others. This blog will outline five significant reasons why Azure is the preferred cloud IoT option in the industrial IoT infrastructure. You will also learn how it can help development teams optimize their operational efficiency.

Microsoft Azure as a Cloud IoT Platform: An Overview

Launched in 2010, Microsoft Azure is one of the three leading private and public cloud computing platforms worldwide. Although Azure was founded comparatively late, its intriguing features have made it a strong contender in the AWS vs. Azure vs. Google Cloud debate. Moreover, Microsoft Azure's global revenue growth stood at 40% in the last quarter of 2022, and its total revenue from public cloud platform as a service (PaaS) reached $111 billion as of last year. It's the versatility of this cloud IoT platform that catches the eye of software developers, engineers, and designers. Azure's IoT solutions cover almost every aspect of industrial IoT development, from linking devices and systems to providing decision-makers with valuable insights. The following section highlights some of the benefits of Microsoft Azure cloud IoT solutions.

Benefits of Azure Cloud IoT

1. Simplicity and Convenience

One of the best things about Microsoft's products is that they are convenient for all types of users, irrespective of their skills. From integrating app templates to leveraging SDKs, everything requires minimal coding. In addition, the platform provides users with several shortcuts for easy wireframing, prototyping, and deployment.

2. Robust Network of Partners

Just like Amazon Web Services, Microsoft Azure has an ever-growing list of globally acclaimed IoT partners. These include a vast community of software developers and IoT hardware manufacturers.

3. Interactive Service Integrations

In Azure IoT Central, one of Microsoft Azure's core IoT solutions, you will find a plethora of fascinating tools and services. For instance, with the help of AccuWeather, you can get insights in the form of weather intelligence reports. Similarly, developers can build a virtual representation of their physical IoT environment with Azure Digital Twins. This feature can also help identify the dependencies and correlations between different parts of the environment.

4. Top-Notch Security

Keeping cybersecurity threats in mind, Microsoft has focused specifically on the security aspects of all its products and services.
Each Azure cloud IoT service is equipped with its own security features to help protect the data and prevent code files from getting infected with viruses.

Reasons Why Microsoft Azure Is the Best Option for Running Industrial IoT Solutions

Azure IoT Central: A Robust SaaS Platform

Although developers can easily build end-to-end IoT products using the basic Microsoft Azure services, the underlying plumbing can still feel complex. In such cases, Azure IoT Central can be an ideal solution to link your existing devices and manage them using the cloud, with no need to build a custom solution. Azure IoT Central is a SaaS product that abstracts Azure's fundamental IoT PaaS capabilities, making it convenient for you to derive value from the linked devices. Besides this, the public-facing APIs help provide a seamless user experience throughout the development process, be it while creating dashboards or connecting IoT devices.

A Rich and Vibrant Partner Ecosystem

One crucial thing to understand about creating a successful IoT solution is that it's not just about writing code and deploying software. Instead, it's more about how efficiently the solution can manage devices and analyze data. For this, you need a professional system integration team that can pick the right hardware and incorporate it with legacy OT technologies. Microsoft Azure cloud IoT solutions provide users with a massive range of software and hardware offerings. For instance, consider one of its product ranges, Azure Stack Edge. Developers can choose between its robust, battery-powered device and a standard server-grade alternative powered by 32 vCPUs, 204 gigabytes of RAM, 2.5 terabytes of local storage, and 2 NVIDIA A2 GPUs. This is one of the reasons why several popular industrial IoT players, like Schneider, ABB, PTC, and Siemens, have developed their platforms on the Microsoft cloud. All these examples show that Microsoft Azure has a rich and vibrant partner ecosystem, delivering intuitive industrial IoT solutions.

A DevOps-Friendly Platform

The role of edge computing is quite significant in developing industrial IoT solutions. Azure IoT Edge, a robust edge computing platform, performs that function well, making developers and system operators more efficient. Azure IoT Edge can run on both AMD64 and ARM64 platforms, and it can be used to form a channel between the public cloud and local devices. Moreover, developers can write business logic as standard Docker containers, which can then be installed as modules in Azure IoT Edge. Operators can also pair Azure IoT Edge with Kubernetes, an open-source tool for automating deployment and management, to continuously monitor the deployments.

An Ultimate Level of Security

As discussed earlier, Microsoft has invested both time and money in enhancing the security aspects of all its products and services. Here are some of the security services available with Microsoft Azure:

Azure Sphere

This is a one-stop solution for users seeking protection for cloud-to-edge and edge-to-cloud integration. Since the device is securely integrated with Azure IoT Central and Azure IoT Hub, users can easily and quickly build secure connected solutions.

Azure Defender for IoT

This solution provides end-to-end security for IoT devices. Azure Defender for IoT leverages features such as behavioral analytics and threat intelligence to constantly monitor IoT devices for unauthorized and unwanted activities.
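As a bridge to the analytics discussion below, here is a minimal sketch of a device feeding telemetry into Azure IoT Hub using the Node.js azure-iot-device SDK. The connection string source, the ten-second interval, and the payload fields are hypothetical placeholders, not details from this article.

JavaScript

// Minimal sketch: send simulated sensor telemetry to Azure IoT Hub over MQTT.
// Assumes the azure-iot-device and azure-iot-device-mqtt npm packages.
const { Client, Message } = require('azure-iot-device');
const { Mqtt } = require('azure-iot-device-mqtt');

// Hypothetical device connection string, read from an environment variable.
const client = Client.fromConnectionString(process.env.IOTHUB_DEVICE_CONNECTION_STRING, Mqtt);

// Publish one simulated temperature reading every ten seconds.
setInterval(() => {
  const body = JSON.stringify({ deviceId: 'line-7-sensor', temperature: 20 + Math.random() * 5 });
  client.sendEvent(new Message(body), (err) => {
    if (err) console.error('Send failed:', err.toString());
    else console.log('Telemetry sent:', body);
  });
}, 10000);

Once readings arrive in IoT Hub, they can be routed onward to services such as Azure Stream Analytics, described next.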
Easy Integration with AI and Data Analytics

To make an IoT solution more functional and efficient, it is crucial to integrate it with advanced technologies like artificial intelligence, machine learning, and big data analytics. Incorporating these technologies makes processes much simpler and saves developers both effort and time. With Azure Stream Analytics, developers can quickly store and process local telemetry data. Moreover, Azure Data Lake or Azure Cosmos DB can be used to store data ingested from sensors. That data can then be passed through Azure ML and Power BI to perform predictive analytics and build predictive models.

Wrapping Up

Several experts describe Microsoft Azure as an IoT cloud platform with 'limitless potential' and 'unlimited possibilities,' and now you probably know why. In fact, observers of Azure's rapid growth report that the day is not far off when it will surpass the dominance of Amazon Web Services (AWS). Microsoft Azure has everything in the package, including services for data management and analysis. Its intriguing features and cloud computing capabilities can help developers construct an industrial IoT environment in the best way possible.

By Shikhar Deopa
Key Characteristics of Web 3.0 Every User Must Know

Web 3.0 is the future version of the present-day Internet, one that will be based purely on public blockchains. Public blockchains refer to the record-keeping systems known for carrying out crypto transactions. Unlike its predecessors, the key feature of Web 3.0 is its decentralized mechanism: rather than accessing the Internet through services mediated by major tech players, individuals and communities govern the services themselves. Users also get the privilege of controlling various parts of the Internet. Web 3.0 doesn't necessarily demand any form of "permissions," meaning that governing bodies have no role in deciding who can access Internet services, nor is any "trust" required. Hence, no intermediary body is necessary to carry out virtual transactions among the parties involved. Since such intermediaries account for most of today's data collection, Web 3.0 will protect user privacy better.

Decentralized Finance, or DeFi, is an integral component of Web 3.0 and has gained significant traction recently. It involves executing real-world financial transactions over blockchain technology without any assistance from banks or the government. Larger enterprises across different industries are now investing in Web 3.0, and it hasn't been easy to accept that their engagement won't steer outcomes back toward some form of centralized authority.

What Is Web 3.0?

Web 3.0, also called the Semantic Web or the read-write-execute web, is the web era starting from 2010 that describes the future of the web. Technologies like artificial intelligence and machine learning allow user systems to analyze data the same way humans do, which assists in the smart generation and distribution of valuable content tailored to the user's needs.

There are many differences between Web 2.0 and Web 3.0, with decentralization at the core of them. Web 3.0 developers rarely create and deploy applications that run on a single server or store data in just one database (hosted on and managed by a single cloud service provider). Rather, applications based on Web 3.0 are developed on blockchains, decentralized networks of multiple servers, or a hybrid of the two. These programs are also called decentralized apps, or DApps. In the Web 3.0 ecosystem, network participants and developers are recognized and rewarded for delivering the best services toward creating a stable and secure decentralized network.

Benefits of Web 3.0 Over Its Predecessors

Since there are no intermediaries involved in Web 3.0, no single party controls user data. This also eliminates the possibility of government or corporate restrictions and reduces the damage from denial-of-service (DoS) attacks.

In previous web versions, getting accurately refined results from search engines proved challenging. However, search engines have since significantly transformed their ability to discover semantically relevant results based on the user's search intent and context. This has made web browsing more convenient than before, allowing users to easily get the specific piece of information they need.

Customer service has also been important for driving a positive user experience on websites and web applications. Leading web-driven organizations find it difficult to scale their customer operations due to high expenditures.
Backed by the emergence of Web 3.0 and its use of artificial intelligence and machine learning technologies, users can get a better experience while engaging with support teams through AI-driven chatbots that can 'talk' to multiple customers simultaneously.

Significant Characteristics of Web 3.0

The transition to Web 3.0 is taking place at a very slow pace and might go unnoticed by the general web audience. Web 3.0 applications strongly resemble Web 2.0 applications in look and feel; however, their back end differs fundamentally. The future of Web 3.0 is headed toward universal applications that can be read and used by many types of devices and software, making the end user's commercial activities more seamless. Decentralization of data and the establishment of transparent, secure environments will emerge with the advent of next-generation technologies like distributed ledgers and blockchain, which will dissolve Web 2.0's centralized surveillance and bombardment of advertisements. In a decentralized web like Web 3.0, individual users get complete control of their data, with decentralized infrastructure and application platforms displacing centralized, tech-based organizations. The following are some major properties of Web 3.0 that help clarify the complexities and intricacies of this emerging web version.

Semantic Web

The concept of the Semantic Web, coined by Tim Berners-Lee to describe a web of data that machines can analyze, is a critical element of Web 3.0. In layman's language, the syntax of two phrases can differ, but their semantics remain similar; semantics is concerned with the meaning conveyed by the facts. Two cornerstones are linked with Web 3.0: the semantic web and artificial intelligence. The semantic web will help computer systems understand what data means, while AI will assist in creating real-world use cases that enhance data use. The primary concept is to create a knowledge loop across the Internet that supports understanding words and then generating, sharing, and connecting content via search and analytics tools. Web 3.0 will boost data communication owing to semantic metadata. As a result, the user experience is elevated to higher levels of connectivity, benefiting from real data that can be easily accessed.

Artificial Intelligence

Owing to artificial intelligence technology, websites can now filter out noise and surface the best facts. In the present Web 2.0 era, enterprises have started soliciting customer feedback for an enhanced understanding of product or service quality, and peer reviews are one of the major contributors to this present-day web. However, these human recommendations and opinions can be biased toward a particular service. Various AI models are now being trained to differentiate between good and bad data and to offer suggestions backed by relevant and accurate information.

Ubiquitous

The ubiquitous characteristic of Web 3.0 is the concept of existing or being present everywhere simultaneously; this feature is already present in Web 2.0 to a degree. For instance, on social media platforms, users share their photos online with everyone, which makes the sharer the intellectual property owner of the media they have shared. Once shared online, the photo becomes available everywhere, making it ubiquitous.
With the increased number of mobile devices and growing Internet penetration across them, Web 3.0 will be accessible from anywhere, at any time. Unlike previous web versions, the Internet won't be restricted to desktops or smartphones. With everything around us getting interconnected in a digital ecosystem called the Internet of Things, Web 3.0 is seen as the web of everything and everywhere.

3D Graphics

Web 3.0 will impact the future of the Internet as it transitions from a two-dimensional web to a more realistic, three-dimensional digital world. Web 3.0 services and websites in sectors such as eCommerce, online gaming, and real estate will make extensive use of three-dimensional design.
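To make the DApp idea above concrete, here is a minimal sketch of how a Web 3.0 application reads state directly from a public blockchain node over JSON-RPC instead of from a company-owned database. The endpoint URL is a hypothetical placeholder for any public RPC provider.

JavaScript

// Minimal sketch: read the latest block number straight from a public
// blockchain node via JSON-RPC (no intermediary application server).
// RPC_URL is a hypothetical placeholder for any public endpoint.
const RPC_URL = 'https://rpc.example.org';

async function latestBlockNumber() {
  const response = await fetch(RPC_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', method: 'eth_blockNumber', params: [], id: 1 }),
  });
  const { result } = await response.json();
  return parseInt(result, 16); // the node returns a hex-encoded number
}

latestBlockNumber().then((n) => console.log('Current block:', n));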

By Rishabh Sinha
Get Started On Celo With Infura RPC Endpoints

Onboarding the next wave of users to Web3 is a massive undertaking that many projects in the ecosystem are building for. One project with a unique approach to this is Celo, a layer-one blockchain network. Celo gives a superior new-user experience by being a mobile-first layer-1 blockchain that is easy to use with just a mobile phone. Your phone number acts as your address rather than a complex string, and the network gives users the option to pay gas fees with tokens other than the native currency. However, the user experience is just one side of the onboarding coin; developer experience is the other. After all, a new network is only as good as the RPCs that let you use it, and few developers have the resources to run a node. Infura, one of the most popular Web3 node providers, now offers Celo Network RPC nodes to all users. So if you want to start building on this mobile-first network, there's never been a better time. Before you start building, let's learn more about Celo. This article will provide a high-level overview of the Celo blockchain network and how you can start building on it using Infura.

What Is Celo?

Celo is a high-throughput layer-1 network that focuses on mobile users.

Mapping Phone Numbers to Public Keys

Celo is easier for mobile phone users than other networks. Celo maps phone numbers to public keys, allowing users to send tokens to people who don't have wallets. A decentralized attestation protocol does the mapping and links an account to a phone number. To maintain privacy, this service never receives the phone number in clear text. (How Celo's attestation protocol works: image from celo.org.) As a result, the user experience is better than on most blockchains, as all interactions happen through phone numbers rather than 30+ character strings that are easy to mistype and impossible to memorize.

Paying Gas Fees With ERC-20 Tokens

Another usability hurdle is that most networks require users to pay gas fees with a native token. This forces users to exchange other tokens for native ones just to be able to send transactions, which is a problem for two reasons. First, it adds a non-trivial step to every transaction if the user doesn't have enough native tokens. Second, exchanging tokens is taxable in some countries, so users need to keep track of every exchange into the native token made just to cover gas fees. With Celo, you can pay with any approved ERC-20 token, even stablecoins, lowering yet another barrier to entry and making costs more predictable. However, there is one caveat: transactions paid with non-CELO gas currencies cost roughly 50k additional gas. It's also important to note that the list of accepted currencies is governable.

For development, Celo comes with a dApp SDK called ContractKit. This SDK is a suite of packages that make building on Celo more straightforward. Connect, one of ContractKit's main packages, acts as a wrapper around web3.js that handles the different currencies used to pay the fees. You can set your preferred currency as the default for all transactions, like in this example:

JavaScript

import { CeloContract } from "@celo/contractkit"

const accounts = await kit.web3.eth.getAccounts()
kit.defaultAccount = accounts[0]
await kit.setFeeCurrency(CeloContract.StableToken)

With this in your code, you are setting the default currency used when the feeCurrency field is left blank in a transaction. The user can still select another currency.
ContractKit comes with a list of contract addresses that include all core Celo currencies. In the example above, CeloContract.StableToken refers to cUSD. It's also possible to set your preferred currency per transaction. In this example, we send cUSD and also pay the fee with cUSD:

JavaScript

const contract = await kit.contracts.getStableToken()
await contract.transfer(recipientAddress, amount)
  .send({ feeCurrency: contract.address })

Celo's virtual machine is also EVM compatible, since it originated as a fork of Geth. This compatibility enables you to reuse most of your Solidity skills when deploying your smart contracts on Celo. However, there are some notable differences. The first difference is that transaction objects have additional fields like feeCurrency, gatewayFee, and gatewayFeeRecipient. They provide full-node incentives and allow users to pay their gas fees with different tokens. This doesn't affect you when porting smart contracts from Ethereum to Celo, but it could be an issue when porting from Celo to Ethereum. The second difference could have implications for your Ethereum-based smart contracts: the DIFFICULTY and GASLIMIT opcodes aren't supported, and the fields are also missing from block headers. A third difference is that the key derivation path is m/44'/52752'/0'/0 and not m/44'/60'/0'/0 like in Ethereum. This derivation path is what allows wallets to generate different keys from one seed phrase.

The Network Is Carbon Negative

CO2 production by blockchain networks has been a huge talking point in the last few years. Following Bitcoin, many of the early networks used the Proof-of-Work consensus algorithm to eliminate Sybil attacks. The Celo protocol uses BFT Proof-of-Stake, which reduces the network's energy usage by over 90%. Plus, it can create a new block in five seconds, less than half the time Ethereum needs, and all blocks are finalized immediately, so you and your users don't have to wait for their actions to be written on-chain. All this optimization still produces CO2, so Celo uses projects like Wren, a carbon offset subscription service that offsets 65.7 tons of CO2 monthly, to stay carbon negative. With tech-funded rainforest protection, Celo has already saved over 30,000 tons of CO2.

Why Use Celo With Infura?

Infura offers free RPCs for prominent wallets, and many big Web3 projects use it as their RPC provider, including Brave, Uniswap, Compound, Gnosis, and Maker, just to name a few. Additionally, Infura has achieved 99.99% uptime and roughly 10 times faster response times than other service providers like Alchemy or QuickNode. ConsenSys, the company behind Infura, also created and maintains crucial Web3 projects such as MetaMask and the Truffle Suite, so the shared know-how of creating wallets, dev tools, and RPCs creates synergies you won't get from any other RPC provider. This also means you get trusted and complementary end-to-end tooling from the ConsenSys suite of products that integrates flawlessly with Infura RPCs. With the release of Celo RPCs, Infura now supports 10 different networks, so you can build on multiple chains at once. Best of all, it's free to access these networks and their archived data!

Summary

Celo is an exciting chain that tackles Web3 user and developer experience pain points with innovative solutions. With its mobile-first approach, users can interact with the network and receive tokens with their phone number rather than a crypto wallet, making onboarding to the network easier for Web3 newcomers.
With the option to pay gas fees with tokens other than the native currency, Celo also removes a massive hurdle in the daily use of a blockchain network; other networks require fees paid in a native and potentially volatile token. Now that Infura offers RPC nodes for the Celo network, it's the perfect time to start building on this mobile-first blockchain network. For more information, check out Infura's docs.
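As a starting point, here is a minimal sketch of connecting to a Celo RPC endpoint with plain web3.js (the library ContractKit wraps). The endpoint URL format and API key are assumptions for illustration; check Infura's docs for the exact Celo URL for your project.

JavaScript

// Minimal sketch: query the Celo network through an Infura-hosted RPC node.
// The URL format and INFURA_API_KEY are hypothetical placeholders.
const Web3 = require('web3');

const web3 = new Web3(`https://celo-mainnet.infura.io/v3/${process.env.INFURA_API_KEY}`);

async function main() {
  const chainId = await web3.eth.getChainId();   // Celo mainnet is 42220
  const block = await web3.eth.getBlockNumber(); // latest block height
  console.log(`Connected to chain ${chainId}, block ${block}`);
}

main().catch(console.error);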

By Paul McAviney
ANEW Can Help With Road Safety

Prelude

Self-driving cars can change everything in terms of road safety and mobility. Self-driving vehicles are capable of sensing their immediate environment and can move safely with little or no human input. With self-driving cars, real-time alerting systems act as a communication channel between vehicle and driver, and real-time signaling and alerting have many tangible and intangible benefits. XYZ's "Autopilot and Full Self-driving capability" has been getting better every year since its introduction, and XYZ's patent to "Automate Turn Signals" is an advanced step in enhancing road safety, not only for self-driving cars but also for drivers who ignore or forget to use turn signals. There is always a question: how independent should a vehicle be in making smart decisions? Self-driving cars should be just as intelligent as the driver in making the right decisions. Autopilot consists of eight external cameras, radar, 12 ultrasonic sensors, and a powerful onboard computer to guide a safe journey.

What is the role of tires? In real-time signaling and alerting, smart tires matter because tires are the only things that touch the ground, and their movement is key in lane changes and turns. Automatic turn signals depend on a steering angle data source combined with ultrasonic sensor data. Only a small percentage of car manufacturers can provide these additional safety measures for automatic signaling and alerting, as it would be very complex and expensive for every manufacturer to develop them. Smart tires can play a key role here, providing additional safety cost-effectively. Smart tires enable not only automatic signaling features but also help detect misconstructed roads and avoid fatal toppling.

In this article, we will introduce a smart tire and explain how it can help tire manufacturing companies design a set of sustainable solutions to ward off various road mishaps, output an alert to identify an overtaking vehicle, and signal a quick estimate of all forms of rough terrain (wrongly designed angles of banking, misconstructed roads) to keep heavy vehicles especially from fatal toppling. In this pursuit, we will affix the inside of a tire with well-calibrated, cost-effective, non-cumbersome semi-micro mechanical tools such as a magnetometer (a compass) and a gyroscope, which work together to feed an edge computing instrument that outputs a quick alert to the driver.

Smart tires give the tire industry huge insights into driving analytics and much more real-time analysis. Tire Pressure Monitoring Systems (TPMS) are definitely an additional safety measure for vehicles and drivers. However, is this the only information that can be extracted from tire data? There is abundant information available from tires that can be used to generate more safety for both vehicle and driver. Safety through smart tires is a cost-effective solution that can help the majority of drivers, instead of focusing only on self-driving cars, which cover only a small percentage of cars in use. Real-time alerting and signaling from smart tires can be more effective than depending on vehicle dynamics: as tires are the only thing in contact with the ground, tire parameters can play a key role in drive analytics and help avoid major road mishaps. Automatic turn signals that depend on a steering angle data source might be inaccurate when a vehicle takes a turn at a lean angle.
This kind of alerting can be more accurate if we source data from tires instead of steering angles. A well-calibrated digital compass built from an accelerometer, gyroscope, and magnetometer using sensor fusion methodologies can give accurate data. The three major accident factors that have to be monitored and controlled are overtaking vehicles, the angle of banking, and fatal toppling due to misconstructed roads.

MEMS, or microelectromechanical systems, are micromachines made up of components between 0.001 and 0.1 mm in size. They consist of a central unit that processes data and multiple components that interact with microsensors. Using a MEMS accelerometer, gyroscope, and magnetometer, we can create a digital compass application that sources data from these microsensors. In this article, we will see how to model a device that can be affixed inside a tire with good calibration, resulting in a digital compass based on tire movement. This device, ANEW (Angular Navigation Early Warning), can help control the three major road mishaps listed above.

ANEW Architecture

First, let's look at the architecture and process flow of the ANEW device. Data recorded by the microsensors is processed with an algorithmic model to reduce sensor noise and stochastic errors due to nonlinearity; this results in an accurate digital compass that can provide real-time alerting. The two main segments in the architecture are the sensors and the optimal estimation algorithm; these two play the key roles in this product's development. We will first see how to calibrate the multiple microsensors. We are using a GY-80 multi-sensor board, which comprises an accelerometer, a gyroscope, and a magnetometer, as shown below.

MEMS Accelerometer

Motion sensors like MEMS accelerometers are characterized by small size, light weight, high sensitivity, and low cost. An accelerometer measures acceleration by measuring a change in capacitance. The primary component of the GY-80 multi-sensor board is the ADXL345 digital accelerometer. Accelerometer operation is based on Newton's second law of motion(1), which says that the acceleration (m/s²) of a body is directly proportional to, and in the same direction as, the net force (newtons) acting on the body, and inversely proportional to its mass (grams). This sensing technique is known for its high accuracy, stability, low power dissipation, and simple structure. Bandwidth for a capacitive accelerometer is only a few hundred hertz because of its physical geometry (a spring), and the air trapped inside the IC acts as a damper.

MEMS Gyroscope

A MEMS gyroscope measures angular rate using the Coriolis effect and offers low cost, small device size, low power consumption, and high reliability, leading to increasing applications in various inertial fields. The Coriolis effect(2), or Coriolis force, arises when an object moving in a direction with a certain velocity is subjected to an external angular rate; the resulting force causes a perpendicular displacement of the mass. MEMS gyroscope measurements are affected by errors, as they are prone to drift. In the next sections, we will see how this drift in values is handled through sensor fusion techniques. Since the ANEW device uses a GY-80 multi-sensor board, it comes with an L3G4200D gyroscope by default.
In general, values from the accelerometer and gyroscope are combined in order to remove extra noise or drift in the gyroscope values; this works because these sensors come with complementary filters. However, when we use these sensors on the tires of a traveling automobile, where rotation rates are very high, the noise will be greater, and these default complementary filters will not be helpful for the ANEW device planned for tires. The gyroscope readings are critical for predicting fatal toppling of vehicles. By default, the values from the accelerometer and gyroscope are integrated into our mathematical model.

MEMS Magnetometer

The third sensor on our GY-80 multi-sensor board is the HMC5883L magnetometer, a MEMS magnetometer that works on the Hall effect(3). Hall effect sensors are used to measure the magnitude of a magnetic field; their output voltage is directly proportional to the strength of the magnetic field passing through them. In general, a basic Hall effect magnetometer is quite sufficient to develop a digital compass in a processing development environment, and this alone can support automated turn signals with proper calibration. But since we are also addressing the toppling of vehicles caused by the angle of banking or misconstructed roads, we use the GY-80 board's accelerometer and gyroscope as well.

We have now seen the first part of the ANEW architecture: the ANEW device and the sensors we will use to develop a digital compass. Before developing the digital compass itself, we have to address the additional drift that will come from the gyroscope. To handle these stochastic errors, we will use an Unscented Kalman Filter in our algorithmic model; the final values are then displayed on a digital compass, which enables automatic alerting.

Kalman Filter

The Kalman filter is an optimal estimation algorithm; it is used to extract information about what you cannot measure from what you can, determining the best values from noisy measurements. Why do we say noisy measurements, and what is drift? For example, if a cup of hot coffee is at 45°C, the thermometer may read 44.6°C the first time and 45.5°C the second time; we will not get the same number each time. State estimation algorithms provide a way to combine all the noisy values and give a better estimate. Everything we receive from the GY-80 multi-sensor board is sensor data, and the gyroscope data in particular is prone to heavy drift, so we need a good estimation algorithm to handle these noisy measurements. The technique is to fuse data from multiple sensors to produce a correct estimate; in our case, it's data fusion between the accelerometer and gyroscope.

Kalman filters are basically defined for linear systems. The linear system process model defines the evolution of the state from time k-1 to time k as(4):

x_k = A·x_(k-1) + B·u_(k-1) + w_(k-1)

where x is the state, u is the control input, and w is the process noise. This process model holds for linear systems; a probability density function over the state (for example, when finding the position of a moving car) illustrates the working principle of the Kalman filter in the linear case(5). However, in our ANEW device model we are fusing the accelerometer and gyroscope in order to handle drift. Due to the non-linear relationship between angular velocity and orientation, it is unclear whether the magnitude of the angular velocity and its distribution across the three gyroscope axes may alter the effect of the considered noise types(6).
Now, taking the nonlinear system into consideration, our set of linear system equations changes: the state transition function and the measurement function become nonlinear. For nonlinear transformations the Kalman filter is not useful, whereas the Extended Kalman Filter, which linearizes nonlinear functions, comes in handy. When a system is nonlinear but can be well approximated by linearization, the Extended Kalman Filter is a good option. However, it has a few drawbacks; the major one is that the Extended Kalman Filter is not a good option if the system is highly nonlinear. For the ANEW device, neither the Kalman filter nor the Extended Kalman Filter will help much, as linearization becomes invalid: our system is highly nonlinear and cannot be approximated that way.

The solution for approximating our highly nonlinear system is the Unscented Kalman Filter (UKF), which approximates the probability distribution. In this model, the UKF selects a minimal set of sample points, or sigma points, so that their mean and covariance are exact. Each sigma point is propagated through the nonlinear system model; the mean and covariance of the nonlinearly transformed points are calculated to compute a Gaussian (probability) distribution, which is further used to calculate new state estimates. The standard process model for implementing the Unscented Kalman Filter(7) describes the difference equation and observation model with additive noise.

Let's see this implementation in the ANEW device. We are trying to fuse data from three sensors (accelerometer, magnetometer, and gyroscope), the last of which has high noise. The process flow of the Kalman-filter-based position estimation algorithm is as follows. In the observation model, z_k is the observed output with the added noise v_k. The first step is to apply an unscented transformation scheme to the augmented state. In the next step, we select the sigma points. Then, in the model forecast step, each sigma point is propagated through the nonlinear process model(7). Next, in the data assimilation step, we combine the information obtained from the forecast step with the newly observed measurement z_k. As per the standard model, we obtain the square-root matrix of the covariance each time to compute a new set of sigma points, which gives us the measurement update.

Digital Compass

Having seen the two main segments of the ANEW device process flow, the GY-80 multi-sensor board and the Unscented Kalman Filter algorithm, and their working principles, let us see how to set up a digital compass. A digital compass, or electronic compass, is a combination of multiple MEMS sensors that provides orientation and measurements in multiple applications. As highlighted earlier, a magnetometer alone is sufficient to set up a digital compass, but to avoid noisy measurements we fuse sensor readings from the gyroscope, accelerometer, and magnetometer for the position estimate (as shown in the Fig. 5 model). We will fuse all the sensor values to produce the final output on the digital compass. Connect all sensors to an Arduino board, which communicates over the I2C (Inter-Integrated Circuit) protocol. In the processing development environment, Arduino Wire libraries are used to set up and start serial communication. Unique device addresses and their internal register addresses can be taken from the datasheets, e.g., the ADXL345 datasheet(8).
The loop section is similar for all sensors: we read the raw data for every axis. The sensitivity of each sensor is defined as per requirements (±250 dps to ±2000 dps for the gyroscope). The angular rate from the gyroscope is calculated and given as an input parameter to our algorithmic model. In the observer state of the model, measurements from the accelerometer and magnetometer are passed to the error update step. The final estimated values are then brought to a serial monitor and displayed on the digital compass. Based on these values, the digital compass drives automatic signaling and alerting. Finally, this data is captured again for drive analytics. The reasons highlighted for major road mishaps are covered in Section 1. The movement of the tire is tracked to alert automatically during overtaking (auto turn alerts based on tire movement), and fatal toppling due to the angle of banking or misconstructed roads is alerted by monitoring the angular rate of tire movement.

Conclusion

Self-driving cars are getting better every year, and automatic turn signals have opened the door to such technology beyond self-driving cars; there is a larger scope for introducing new technologies. Looking at smart tires, as tires are the first thing that comes in contact with the road, there is plenty of untapped data from tires that can be used for greater insights to increase road safety and mobility. Major road mishaps that occur while overtaking, due to the angle of banking, or on misconstructed roads are well handled and predictable through the ANEW device and can be avoided to a great extent. Smart tires with ANEW device features can help make risk-free decisions on misconstructed roads, avoid fatal toppling, and help build autonomous vehicle control systems into future tires. The core idea of this concept can help tire-manufacturing companies design a set of sustainable solutions to ward off various road mishaps.
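To complement the filtering discussion above, here is a minimal 1-D sketch of the predict/update cycle that any Kalman-style fusion builds on: a drifting gyroscope-integrated angle is corrected by a noisy accelerometer angle. This is a simplified linear filter for illustration only, not the Unscented Kalman Filter the ANEW device uses, and all noise values and sample readings are made-up placeholders.

JavaScript

// Minimal 1-D Kalman filter sketch: fuse a drifting gyroscope angle
// (prediction) with a noisy accelerometer angle (measurement).
class SimpleKalman {
  constructor(processNoise, measurementNoise) {
    this.q = processNoise;     // process noise covariance
    this.r = measurementNoise; // measurement noise covariance
    this.x = 0;                // estimated angle (degrees)
    this.p = 1;                // estimation error covariance
  }
  // Predict step: integrate the gyroscope rate (deg/s) over dt seconds.
  predict(gyroRate, dt) {
    this.x += gyroRate * dt;
    this.p += this.q;
  }
  // Update step: correct the prediction with the accelerometer angle.
  update(accelAngle) {
    const k = this.p / (this.p + this.r); // Kalman gain
    this.x += k * (accelAngle - this.x);
    this.p *= (1 - k);
    return this.x;
  }
}

// Usage: one fused estimate per sensor sample (placeholder values).
const filter = new SimpleKalman(0.01, 4.0);
filter.predict(1.5 /* deg/s from gyro */, 0.02 /* 50 Hz sample */);
console.log(filter.update(0.4 /* deg from accelerometer */));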

By Rajesh Gaddipati
A Cloud-Native SCADA System for Industrial IoT Built With Apache Kafka

Industrial IoT and Industry 4.0 enable digitalization and innovation, and SCADA control systems are a vital component of IT/OT modernization. The SCADA evolution started with monolithic applications and moved to networked and web-based platforms. This blog post explores building the fifth generation: a cloud-native SCADA infrastructure with Apache Kafka. A real-world case study of a German transmission system operator for electricity shows how the journey toward open, scalable real-time workloads and edge-to-cloud integration progressed.

What Is a SCADA System?

Supervisory control and data acquisition (SCADA) is a control system architecture comprising computers, networked data communications, and graphical user interfaces for high-level supervision of machines and processes. It also covers sensors and other devices, such as programmable logic controllers, which interface with process plants or machinery. While many people refer to specific commercial products, SCADA is a concept or architecture. It can include various components, functions, and products (from different vendors) on different levels. Wikipedia has a detailed article explaining the terms, history, components, and functions of SCADA. The evolution describes four generations of SCADA systems:

First generation: Monolithic
Second generation: Distributed
Third generation: Networked
Fourth generation: Web-based

The evolution did not stop there. The following explores the fifth generation: cloud-native and open SCADA systems.

How Does Apache Kafka Help in Industrial IoT?

Industrial IoT (IIoT) and Industry 4.0 create a few new challenges across industries:

The need for a much bigger scale
The demand for real-time information
Hybrid architectures with mission-critical workloads at the edge and analytics in elastic public cloud infrastructure
A flexible Open API culture and data sharing across OT/IT environments and between partners (e.g., supplier, OEM, and mobility service)

Apache Kafka is unique in its characteristics for IoT infrastructures, being very scalable (for transactional and analytical requirements and SLAs), reliable, and open. Hence, many new Industrial IoT projects adopt Apache Kafka for various use cases, including a data hub between OT and IT, global integration of smart factories for analytics, predictive maintenance, customer 360, and many other scenarios.

Cloud-Native Data Historian Powered by Apache Kafka (Operating at the Edge or in the Cloud)

The Data Historian is a well-known concept in Industrial IoT. It helps to ensure and improve Overall Equipment Effectiveness (OEE). The term often overlaps with SCADA; some people even use it as a synonym. Apache Kafka can be used as a component of a Data Historian to improve OEE and reduce or eliminate the most common causes of equipment-based productivity loss in manufacturing (aka the Six Big Losses). Continuous real-time data ingestion, processing, and monitoring 24/7 at scale is a crucial requirement for thriving Industry 4.0 initiatives. Data streaming with Apache Kafka and its ecosystem brings enormous value to implementing these modern IoT architectures. Let's explore a concrete example of a cloud-native SCADA system.

50Hertz: A Case Study for a Cloud-Native SCADA System Built With Apache Kafka

50Hertz is a transmission system operator for electricity in Germany. The company secures electricity supply to 18 million people in northern and eastern Germany. The infrastructure must operate 24 hours a day, seven days a week.
Various shift teams and a mission-critical SCADA infrastructure supervise and control the OT systems. 50Hertz presented their OT/IT and SCADA modernization leveraging data streaming with Apache Kafka at the Confluent Data in Motion tour 2021. The on-demand video recording is available (the speech is in German, unfortunately).

The Journey of 50Hertz in a Big Picture

Look at this fantastic picture of 50Hertz's digital transformation journey from monolithic, proprietary legacy technology to a modern cloud-native integration platform powered by Kafka to modernize their IoT ecosystem, including SCADA systems (source: 50Hertz). Notice the details in the picture:

The legacy infrastructure on the left side glues and patches together different components. It is almost breaking apart, and no changes are possible to existing components.
The new infrastructure on the right side is based on flexible, standardized containers. It is easy to scale and to add or remove applications. The communication happens via standardized interfaces and schemas.
The bridge in the middle shows the journey. This is a brownfield approach where the old and new worlds have to communicate with each other for many years. Over time, the company can shut down more and more of the legacy infrastructure.

This is a great example of innovation in the energy sector! Let's explore the details of building a cloud-native SCADA system with Apache Kafka.

Challenges of the Monolithic Legacy IoT Infrastructure

The old IT/OT infrastructure and SCADA system are monolithic, proprietary, not scalable, and missing open APIs based on standard interfaces (source: 50Hertz). This is a very common infrastructure setup; most existing OT/IT infrastructures have exactly the same challenges. This is how factories and production lines were built in past decades. The consequence is inflexibility regarding software updates, hardware changes, and security fixes, with no option for scalability or innovation. Applications run in disconnected mode and are air-gapped from the internet because the old Windows servers are no longer supported and no longer get security patches. Digital transformation in the industrial space requires modernization, and legacy infrastructure still needs to be integrated in most scenarios. Not every company starts from scratch like Tesla, building brand-new factories designed around automation and digitalization.

Cloud-Native SCADA With Kafka To Enable Innovation (And Legacy Integration)

50Hertz's next-generation Modular Control Center System (MCCS) leverages a central, scalable, event-based integration platform based on Confluent (source: 50Hertz). The first four containers include the Supervisory & Control (SCADA), Load Frequency Control (LFC), and Time Series Management & Forecasting applications. Each container can have multiple services/functions that follow the event-based microservices pattern. 50Hertz provides central governance for security, protocols, and data schemas (CIM compliant) between platform containers/modules. The cloud-native 24/7 SCADA system is developed in the cloud and deployed in safety-critical edge environments.
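To illustrate the event-based integration pattern described above, here is a minimal sketch of a gateway service publishing sensor telemetry to a Kafka topic with the kafkajs client. The broker address, topic name, and payload fields are hypothetical placeholders, not details from the 50Hertz deployment.

JavaScript

// Minimal sketch: publish SCADA sensor readings to Kafka with kafkajs.
// Broker address, topic, and message fields are hypothetical placeholders.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'scada-gateway', brokers: ['broker-1:9092'] });
const producer = kafka.producer();

async function publishReading(reading) {
  // Keying by substation keeps each substation's events ordered per partition.
  await producer.send({
    topic: 'grid.sensor.telemetry',
    messages: [{ key: reading.substationId, value: JSON.stringify(reading) }],
  });
}

async function main() {
  await producer.connect();
  await publishReading({ substationId: 'sub-42', frequencyHz: 50.01, timestamp: Date.now() });
  await producer.disconnect();
}

main().catch(console.error);

Because producers and consumers are decoupled through the topic, monitoring, forecasting, or analytics services can subscribe independently without the gateway knowing about them.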
More on Data Streaming and Industrial IoT

If you want to learn more about real-world case studies, use cases, and technical architectures for data streaming with Apache Kafka in IIoT scenarios, check out these articles on my blog:

Use cases and architectures for Apache Kafka for Industrial IoT and Manufacturing 4.0
Apache Kafka as Data Historian – an IIoT / Industry 4.0 Real-Time Data Lake
OPC UA, MQTT, and Apache Kafka – The Trinity of Data Streaming in IoT
Use cases and architecture for Kafka and MQTT
Apache Kafka Landscape for Industrial IoT in the Automotive Industry
Data streaming in air-gapped and zero-trust environments
Real-Time Logistics, Shipping, and Transportation with Apache Kafka
A Real-Time Supply Chain Control Tower powered by Kafka
A postmodern ERP system built with Kafka

If this is insufficient, please let me know what else you need to know... :-)

Cloud-Native Architectures and Open APIs Are the Future of Industrial IoT

50Hertz is a tremendous real-world case study about the modernization of the OT/IT world. A modern SCADA architecture requires real-time data processing at any scale, true decoupling between data producers and consumers (no matter what API those apps use), and open interfaces to integrate with any other application, like MES, ERP, and cloud services. From the IT side, this is nothing new. The last decade brought scalable open-source technologies like Kafka, Spark, Flink, and Iceberg, plus related fully managed, elastic cloud services like Confluent Cloud, Databricks, and Snowflake. However, the OT side has to change. Instead of using monolithic legacy systems, unsupported and unstable Windows servers, and proprietary protocols, next-generation SCADA systems need to use the same cloud-native IT systems, adopt modern OT hardware/software combinations, and integrate the old and new worlds to enable digitalization and innovation in industry verticals like manufacturing, automotive, military, and energy.

What role does data streaming play in your Industrial IoT environments and OT/IT modernization? Do you run everything around Kafka in the cloud or operate hybrid edge scenarios? What tasks does Kafka take over: is it "just" the data hub, or are IoT use cases built with it, too? Let's connect on LinkedIn and discuss it! Stay informed about new blog posts by subscribing to my newsletter.

By Kai Wähner CORE
Can Artificial Intelligence Provide Value in IoT Applications?
Can Artificial Intelligence Provide Value in IoT Applications?

If you are involved in the field of IoT technology, it is essential to understand the importance and benefits of AI. In this article, I will discuss the key aspects of AI so that you can get a clear picture of the topic. Today, IoT applications include visual recognition, predicting future events, and identifying objects. You might wonder what makes IoT applications different: they are used for many purposes, like home automation, healthcare, and manufacturing, and they can also be used in smart cities.

AI Algorithms Allow the System to Evaluate, Learn, and Act Independently

AI algorithms allow a system to evaluate, learn, and act independently; they can even be used to create a kind of virtual brain. The technology is designed to learn from experience as well as to pick up new things on its own. This means that if you want your device or system to learn certain skills, you need to feed it data, either yourself or through someone else (e.g., an employee).

Machine Learning Is Another Branch of AI

Machine learning is another branch of AI. It allows a program to analyze huge data sets and make decisions on its own when required. Machine learning can be used for a variety of purposes, such as image classification, speech recognition, or recommendation engines. It uses data to learn patterns in order to automate processes that would otherwise require human intervention. For example, an autonomous vehicle (AV) might use it to recognize traffic signs and road conditions at night so that it knows how fast it should drive on a particular road based on its surroundings, rather than relying solely on instructions provided by its designers or other people familiar with those roads.

Deep Learning Is the Best-Known Example of Machine Learning

Deep learning is a type of machine learning that uses artificial neural networks (ANNs) to perform pattern recognition and classification tasks. It relies on many layers of ANNs, where each layer has multiple neurons and learns from past experience. The human brain is a loose analogy for a deep learning system, as it can perceive and process information in many different ways. This ability allows us to understand language, recognize faces, read books, and make decisions based on our experiences or knowledge retrieved from previous situations.

AI Requires a Significant Amount of Data

AI technology requires a significant amount of data, and manufacturers can use the data collected by IoT devices. The more data that is available to train an AI model, the better it will perform. For example, if you have an IoT device that monitors the temperature in your home and sends you alerts when it detects changes outside of normal parameters (such as a drop of two degrees), you may be able to train a predictive model using this information along with other factors, such as weather patterns or historical trends, so that your device can predict whether another cold snap is coming. This type of analysis can help reduce the costs of maintaining equipment such as heating systems or air conditioners: if such systems are not monitored throughout their lifetimes, they run less efficiently over time due to the wear and tear of repeated heating and cooling cycles (especially during winter months).
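To illustrate the kind of rule-based baseline such a model would improve on, here is a toy Java sketch of the two-degree-drop alert described above. The class name, window size, and threshold are all invented for illustration; a trained model would replace the fixed rule with a learned one.

import java.util.ArrayDeque;
import java.util.Deque;

/** Toy alert rule: flag a reading that drops 2°C below the recent average. */
public class TemperatureAlert {
    private static final int WINDOW = 10;    // invented rolling-window size
    private static final double DROP = 2.0;  // invented alert threshold in °C
    private final Deque<Double> recent = new ArrayDeque<>();

    public boolean isAnomalous(double reading) {
        boolean anomalous = false;
        if (!recent.isEmpty()) {
            double avg = recent.stream()
                    .mapToDouble(Double::doubleValue).average().orElse(reading);
            anomalous = (avg - reading) >= DROP; // drop below the recent baseline
        }
        if (recent.size() == WINDOW) {
            recent.removeFirst(); // evict the oldest reading
        }
        recent.addLast(reading);
        return anomalous;
    }

    public static void main(String[] args) {
        TemperatureAlert alert = new TemperatureAlert();
        for (double t : new double[]{21.0, 21.2, 20.9, 18.8}) {
            System.out.printf("%.1f°C anomalous? %b%n", t, alert.isAnomalous(t));
        }
    }
}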
IoT and AI Can Be Used to Give Instructions to Machines at Home or Work Without Speaking or Typing Anything

As you can see from the above examples, AI and IoT are not just two technologies working together. They actually complement each other in some areas, making it possible for people to give instructions to machines at home or work without speaking or typing anything. They also have other benefits: Using AI in IoT applications allows us to create systems that can learn from their environment and adapt accordingly. This makes them more efficient than traditional approaches, which rely on predefined rules (e.g., "if these conditions are met, then do this"). For example, an autonomous vehicle might be able to identify traffic patterns better than a human driver because it has access to all kinds of data about road conditions, including weather forecasts. So if heavy rain is forecast later today, the car can factor in reduced visibility and changing road conditions when driving around town, for example while looking for parking spots.

We Have Come to the End of This Blog

Here, I have discussed the essential aspects of using AI in IoT applications. AI is a branch of computer science that deals with the design and development of intelligent agents: software that can sense its environment and take actions that maximize its chance of success at some goal. It has been applied in fields such as engineering, philosophy, law, biology, and economics for over 50 years. The term "artificial intelligence" itself was coined by John McCarthy around the 1956 Dartmouth workshop; in the same era, Arthur Samuel's checkers-playing program at IBM became one of the first self-learning systems, improving by playing games against modified copies of itself until it could beat capable human players, although it was severely limited by the speed and memory of the hardware of the day. Ultimately, AI is one of the most promising technologies and will play an important role in making IoT work smarter. The use of AI can help us solve problems related to data collection, analysis, and decision-making.

By Riley Adams
Top 5 Internet of Things (IoT) Trends to Expect in 2023
Top 5 Internet of Things (IoT) Trends to Expect in 2023

Computers and smartphones were the first devices connected to the internet. Over the previous ten years, the human lifestyle has evolved with the introduction of smart TVs, electric kettles, and smart fridges, and people have been using intelligent alarms, cameras, and light bulbs. In the industrial space, employees have become accustomed to working in a smart machinery environment, for example with robots. According to McKinsey and Company, more than 43 billion devices will be linked to the internet in 2023. These devices will generate and collect data and help people utilize it in various ways.

Key Market Insights

As per Markets and Markets Research, the global IoT market size will reach $650.5 billion by 2026, showing a CAGR of 16.7% from 2021-2026. According to the report, the essential market drivers of the industry are below:

Access to Low-Cost, Low-Power Sensor Technology

When it comes to IoT devices, sensory instruments play an essential role. Sensor technologies can generate data about any physical event, such as orientation, motion, light, humidity, and temperature. They can even monitor biometric signals, e.g., blood pressure and heart rate. Innovation in sensor technologies will expand IoT capabilities even more. In the past decade, the cost of sensor technologies was high, resulting in limited adoption in the industrial sector. Over time, however, the decline in prices has increased adoption rates across modern-day organizations. For instance, the cost of low-frequency passive Radio Frequency Identification (RFID) tags and sensors has fallen in the past decade, and the average cost of sensor technologies has decreased from $1.30 per unit to $0.38 per unit. Growth in the IoT market has ensured the widespread deployment of low-cost devices, contributing to technological advancement.

Top 5 Internet of Things Trends to Keep a Tab On

The Internet of Things (IoT) is a system of connected devices, digital machines, and users with unique identifiers and the ability to transfer data over a network without human-to-human or human-to-machine interaction. The following are the five paramount trends of IoT that will transform the world in 2023.

1. Building Realistic Digital Twins and the Enterprise Metaverse

This is a merger of two significant tech trends that will dictate the application of innovative technology across various industries during 2023. For modern-day businesses, the metaverse will play an important role in bridging the gap between the virtual and real worlds. With IoT sensors, the creation of realistic digital twins will become easier. Corporate professionals can use Virtual Reality (VR) headsets to step inside a digital twin and understand its functioning to influence business outcomes.

2. Discouraging Fraud Through Enhanced IoT Security

IoT devices improve the lifestyle of users, but loopholes in the network attract cybercriminals. In other words, more connected devices mean more opportunities for fraudsters to accomplish their malicious goals. With an increase in the number of devices during 2023, manufacturers and security experts will gear up to combat fraud attempts from bad actors. This way, professionals can ensure strong security around the sensitive data of individuals. In the USA, the White House National Security Council has declared that it will establish standardized security labeling for consumer IoT device manufacturers in the first quarter of 2023.
In this case, users can quickly identify the risks linked with IoT systems. In addition, the United Kingdom (UK) will introduce its Product Security and Telecommunications Infrastructure (PSTI) bill to address security issues in IoT systems.

3. Utilizing the Internet of Healthcare Things

The healthcare sector presents a huge growth opportunity for IoT technology, as the financial worth of Internet of Things-based health devices will reach around $267 billion by 2023. A massive game changer is the use of wearables and in-home sensors that empower healthcare professionals to monitor patients. This not only provides 24/7 medical care but also frees up valuable resources for emergency care. In 2023, more patients will become familiar with virtual hospital wards, where sensors and telemedicine approaches will help professionals deal with patients. The use of identity verification services can also help the healthcare sector discourage bad actors from exploiting the system.

4. Gaining Insight Into Governance and Regulations in the IoT Space

During 2023, the European Union (EU) will introduce legislation that will require manufacturers and vendors of smart devices to follow stringent regulations. This applies to customer data collection and storage and to what should be done to ensure data privacy. In Asia, 2023 brings a three-year plan by the Chinese government to introduce policies for the mass adoption of IoT technology. In China and elsewhere in the world, IoT can drive massive growth in the corporate sector. First, however, experts should create a plan to circumvent problems with privacy and personal rights.

5. Using IoT and Cloud Computing

The combination of cloud computing and IoT can increase data storage, improve data processing, and ensure greater business scalability. It also reduces infrastructure costs and enhances security. Together, IoT and cloud computing enable business experts to make real-time decisions and accomplish goals by automating recurring tasks. According to the Research and Markets report, the global market for cloud computing in industrial IoT will reach a financial worth of around $8.159 billion by 2026, showing a CAGR of 10.98%.

The Bottom Line

Several IoT services have entered the market over the past five years. With time, companies are realizing the potential of IoT systems to enhance security, automate mundane tasks, and streamline data processing. Over the next ten years, the IoT market size will keep growing exponentially. Hence, the Internet of Things will be a potent force behind the transformation of human society.

By Emily Daniel
How to Use MQTT in Java
How to Use MQTT in Java

MQTT is an OASIS standard messaging protocol for the Internet of Things (IoT). It is designed as an extremely lightweight publish/subscribe messaging transport that is ideal for connecting remote devices with a small code footprint and minimal network bandwidth. MQTT today is used in a wide variety of industries, such as automotive, manufacturing, telecommunications, oil and gas, etc. This article introduces how to use MQTT in a Java project to implement connecting, subscribing, unsubscribing, publishing, and receiving messages between the client and the broker.

Add Dependency

The development environment for this article is:

Build tool: Maven
IDE: IntelliJ IDEA
Java: JDK 1.8.0

We will use the Eclipse Paho Java Client, which is the most widely used MQTT client library in the Java language. Add the following dependencies to the pom.xml file.

<dependencies>
    <dependency>
        <groupId>org.eclipse.paho</groupId>
        <artifactId>org.eclipse.paho.client.mqttv3</artifactId>
        <version>1.2.5</version>
    </dependency>
</dependencies>

Create an MQTT Connection

MQTT Broker

This article uses the public MQTT broker provided by EMQX Cloud. The server access information is as follows:

Broker: broker.emqx.io
TCP Port: 1883
SSL/TLS Port: 8883

Connect

Set the basic connection parameters of MQTT. Username and password are optional.

String broker = "tcp://broker.emqx.io:1883";
// TLS/SSL
// String broker = "ssl://broker.emqx.io:8883";
String username = "emqx";
String password = "public";
String clientid = "publish_client";

Then create an MQTT client and connect to the broker.

MqttClient client = new MqttClient(broker, clientid, new MemoryPersistence());
MqttConnectOptions options = new MqttConnectOptions();
options.setUserName(username);
options.setPassword(password.toCharArray());
client.connect(options);

Instructions:

MqttClient: provides a set of methods that block and return control to the application program once the MQTT action has completed.
MqttClientPersistence: represents a persistent data store used to store outbound and inbound messages while they are in flight, enabling delivery to the QoS specified.
MqttConnectOptions: holds the set of options that control how the client connects to a server. Here are some common methods:

setUserName: sets the user name to use for the connection.
setPassword: sets the password to use for the connection.
setCleanSession: sets whether the client and server should remember state across restarts and reconnects.
setKeepAliveInterval: sets the "keep alive" interval.
setConnectionTimeout: sets the connection timeout value.
setAutomaticReconnect: sets whether the client will automatically attempt to reconnect to the server if the connection is lost.

Connecting With TLS/SSL

If you want to use a self-signed certificate for TLS/SSL connections, add bcpkix-jdk15on to the pom.xml file.

<!-- https://mvnrepository.com/artifact/org.bouncycastle/bcpkix-jdk15on -->
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk15on</artifactId>
    <version>1.70</version>
</dependency>

Then create the SSLUtils.java file with the following code.
package io.emqx.mqtt;

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.openssl.PEMKeyPair;
import org.bouncycastle.openssl.PEMParser;
import org.bouncycastle.openssl.jcajce.JcaPEMKeyConverter;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FileReader;
import java.security.KeyPair;
import java.security.KeyStore;
import java.security.Security;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class SSLUtils {
    public static SSLSocketFactory getSocketFactory(final String caCrtFile,
            final String crtFile, final String keyFile, final String password)
            throws Exception {
        Security.addProvider(new BouncyCastleProvider());

        // load CA certificate
        X509Certificate caCert = null;
        FileInputStream fis = new FileInputStream(caCrtFile);
        BufferedInputStream bis = new BufferedInputStream(fis);
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        while (bis.available() > 0) {
            caCert = (X509Certificate) cf.generateCertificate(bis);
        }

        // load client certificate
        bis = new BufferedInputStream(new FileInputStream(crtFile));
        X509Certificate cert = null;
        while (bis.available() > 0) {
            cert = (X509Certificate) cf.generateCertificate(bis);
        }

        // load client private key
        PEMParser pemParser = new PEMParser(new FileReader(keyFile));
        Object object = pemParser.readObject();
        JcaPEMKeyConverter converter = new JcaPEMKeyConverter().setProvider("BC");
        KeyPair key = converter.getKeyPair((PEMKeyPair) object);
        pemParser.close();

        // CA certificate is used to authenticate the server
        KeyStore caKs = KeyStore.getInstance(KeyStore.getDefaultType());
        caKs.load(null, null);
        caKs.setCertificateEntry("ca-certificate", caCert);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance("X509");
        tmf.init(caKs);

        // client key and certificate are sent to the server so it can authenticate the client
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);
        ks.setCertificateEntry("certificate", cert);
        ks.setKeyEntry("private-key", key.getPrivate(), password.toCharArray(),
                new java.security.cert.Certificate[]{cert});
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, password.toCharArray());

        // finally, create the SSL socket factory
        SSLContext context = SSLContext.getInstance("TLSv1.2");
        context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return context.getSocketFactory();
    }
}

Set the options as follows.

String broker = "ssl://broker.emqx.io:8883";
// Set socket factory
String caFilePath = "/cacert.pem";
String clientCrtFilePath = "/client.pem";
String clientKeyFilePath = "/client.key";
SSLSocketFactory socketFactory = SSLUtils.getSocketFactory(caFilePath, clientCrtFilePath, clientKeyFilePath, "");
options.setSocketFactory(socketFactory);

Publish MQTT Messages

Create a class PublishSample that will publish a Hello MQTT message to the topic mqtt/test.
package io.emqx.mqtt;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class PublishSample {
    public static void main(String[] args) {
        String broker = "tcp://broker.emqx.io:1883";
        String topic = "mqtt/test";
        String username = "emqx";
        String password = "public";
        String clientid = "publish_client";
        String content = "Hello MQTT";
        int qos = 0;

        try {
            MqttClient client = new MqttClient(broker, clientid, new MemoryPersistence());
            MqttConnectOptions options = new MqttConnectOptions();
            options.setUserName(username);
            options.setPassword(password.toCharArray());
            options.setConnectionTimeout(60);
            options.setKeepAliveInterval(60);
            // connect
            client.connect(options);
            // create message and set up QoS
            MqttMessage message = new MqttMessage(content.getBytes());
            message.setQos(qos);
            // publish message
            client.publish(topic, message);
            System.out.println("Message published");
            System.out.println("topic: " + topic);
            System.out.println("message content: " + content);
            // disconnect
            client.disconnect();
            // close client
            client.close();
        } catch (MqttException e) {
            throw new RuntimeException(e);
        }
    }
}

Subscribe

Create a class SubscribeSample that will subscribe to the topic mqtt/test.

package io.emqx.mqtt;

import org.eclipse.paho.client.mqttv3.*;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class SubscribeSample {
    public static void main(String[] args) {
        String broker = "tcp://broker.emqx.io:1883";
        String topic = "mqtt/test";
        String username = "emqx";
        String password = "public";
        String clientid = "subscribe_client";
        int qos = 0;

        try {
            MqttClient client = new MqttClient(broker, clientid, new MemoryPersistence());
            // connect options
            MqttConnectOptions options = new MqttConnectOptions();
            options.setUserName(username);
            options.setPassword(password.toCharArray());
            options.setConnectionTimeout(60);
            options.setKeepAliveInterval(60);
            // set up callback
            client.setCallback(new MqttCallback() {
                public void connectionLost(Throwable cause) {
                    System.out.println("connectionLost: " + cause.getMessage());
                }

                public void messageArrived(String topic, MqttMessage message) {
                    System.out.println("topic: " + topic);
                    System.out.println("Qos: " + message.getQos());
                    System.out.println("message content: " + new String(message.getPayload()));
                }

                public void deliveryComplete(IMqttDeliveryToken token) {
                    System.out.println("deliveryComplete---------" + token.isComplete());
                }
            });
            client.connect(options);
            client.subscribe(topic, qos);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

MqttCallback:

connectionLost(Throwable cause): called when the connection to the server is lost.
messageArrived(String topic, MqttMessage message): called when a message arrives from the server.
deliveryComplete(IMqttDeliveryToken token): called when delivery for a message has been completed and all acknowledgments have been received.

Test

Next, run SubscribeSample to subscribe to the mqtt/test topic. Then run PublishSample to publish a message on the mqtt/test topic. We will see that the publisher successfully publishes the message and the subscriber receives it.

Summary

We have now used the Paho Java Client as an MQTT client to connect to a public MQTT broker and implement message publishing and subscription. The full code is available on GitHub.
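The introduction also mentions unsubscribing, which the samples above do not show. A minimal sketch, assuming client is the connected MqttClient from SubscribeSample (Paho throws MqttException, so wrap these calls in a try/catch):

// Stop receiving messages for the topic, then shut the client down cleanly.
client.unsubscribe("mqtt/test");
client.disconnect();
client.close();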

By Zhiwei Yu
Adaptive Sampling in an IoT World
Adaptive Sampling in an IoT World

The Internet of Things (IoT) is now an omnipresent network of connected devices that communicate and exchange data over the internet. These devices can be anything from industrial machinery monitors, weather and air quality monitoring systems, and security cameras to smart thermostats, refrigerators, and wearable fitness trackers. As the number of IoT devices increases, so does the volume of data they generate. A typical application of this data is to improve the performance and efficiency of the systems being monitored and to gain insights into their users' behavior and preferences. However, the sheer volume makes such data challenging to collect and analyze. Furthermore, a large volume of data can overwhelm both the communication channels and the limited power and processing available on edge devices. This is where adaptive sampling techniques come into play. These techniques can reduce workload and resource requirements and improve the accuracy and reliability of the data.

Adaptive Sampling

Adaptive sampling techniques "adapt" their sampling or transmission frequency based on the specific needs of the device (for example, a device on a limited data plan, a small battery, or a compute-restricted platform) or on changes in the system of interest. Examples:

A temperature monitoring sensor may collect data more frequently when there are rapid changes in temperature and less frequently when the temperature remains stable.
A security camera captures images at a faster frame rate or higher resolution when there is activity in the field of view.
An air particulate meter increases its sampling rate when it notices the air quality deteriorating.
A self-driving car constantly senses the environment but may send special edge cases back to a central server for edge-case discovery.

What and Where to Sample

The resource utilization improvements you expect guide what and where to sample. There are two places to implement sampling: at measurement or at transmission.

Sampling at measurement: The edge device only measures (or updates its measurement frequency) when an algorithm (running either on the edge device or on a server) deems fit. This reduces power and compute and periodically improves network bandwidth utilization.
Sampling at transmission: The edge device measures continuously and processes the measurements with an algorithm running locally. If a sample is high entropy, it uploads the data to the cloud/server. Power and compute at measurement are unaffected; network bandwidth utilization is reduced.

Identifying Important and Useful Data

We have often heard the mantra "data, data, data." But is all data equal? Not really. Data is most useful when it brings information. This is true even for Big Data applications that are admittedly data-hungry: Machine Learning and statistical systems all need high-quality data, not just large quantities. So how do we find high-quality data? Entropy!

Entropy

Entropy is the measurement of uncertainty in a system; more intuitively, it is the measure of "information" in a system. For example, consider a system with a constant value or a constant rate of change (say, temperature). In optimal working conditions, there is no new information: you will get the expected measurement every time you sample. This is low entropy. On the other hand, if the temperature changes noisily or unexpectedly, the entropy in the system is high; there is new and interesting information.
The more unexpected the change, the larger the entropy and the more important that measurement. In information theory, the information content of a measurement x is -log p(x), and entropy is its expected value over all outcomes: H(X) = -Σ p(x) log p(x). When the probability of occurrence p(x) is low, the information content is high, and vice versa; a measurement probability of 1 (something we fully expect to happen) yields zero information, and rightly so. This principle of "informational value" is central to adaptive sampling.

Some State-of-the-Art Techniques

The basic logic flow in all adaptive techniques is to use model predictions to understand the information contained in new measurements (sampled data). These model-prediction algorithms analyze past data and identify patterns that help predict whether a high-entropy event is likely to occur, allowing the system to focus its data collection efforts. The magic lies in how well we can model our predictions.

Adaptive filtering methods: These methods apply filtering techniques to past measurements to estimate the measurements at the next time steps. They can be FIR (Finite Impulse Response) or IIR (Infinite Impulse Response) techniques such as:

Weighted moving average (can be made more expressive with probabilistic or exponential treatment)
Sliding window-based methods

They are relatively low in complexity but may have a non-trivial memory footprint for buffering past measurements. They need small amounts of data for configuring.

Kalman filter methods: Kalman filters are efficient and have small memory footprints. They can be relatively complex and hard to configure but work well when tuned correctly. They need small amounts of data for configuring.

Machine learning methods: Using past collected data, we can build machine learning models to predict the next state of the system under observation. These are more complex but also generalize well. Depending on the task and complexity, large amounts of data may be needed for training.

Major Benefits

Improved efficiency: By collecting and analyzing a subset of the available data, IoT devices can reduce workload and resource requirements. This helps improve efficiency and performance and reduces data collection, analysis, and storage costs.
Better accuracy: By selecting the data sources that are most likely to provide the most valuable or informative data, adaptive sampling techniques can help improve the accuracy and reliability of the data. This is particularly useful when making decisions or taking actions based on the data.
Greater flexibility: Adaptive sampling techniques allow IoT devices to adapt to changes in the data sources or the data itself. This is particularly useful for devices deployed in dynamic or changing environments, where the data may vary over time.
Reduced post-processing complexity: By collecting and analyzing data from a subset of the available data sources, adaptive sampling techniques can help reduce the complexity of the data and make it easier to understand and analyze. This is particularly useful for devices with limited processing power or storage capacity, or for teams with limited data science/engineering resources.

Potential Limitations

Selection bias: By selecting a subset of the data, adaptive sampling techniques may introduce selection bias into the data. This can occur if the models and systems are trained on a specific type of data that is not representative of the overall data population, leading to inaccurate or unreliable conclusions.
Sampling errors: There is a risk of errors occurring in the sampling process, which can affect the accuracy and reliability of the data.
These errors may be due to incorrect sampling procedures, inadequate sample size, or non-optimal configurations.
Resource constraints: Adaptive sampling techniques may require additional processing power, storage capacity, or bandwidth, which may not be available on all IoT devices. This can limit the use of adaptive sampling techniques on specific devices or in certain environments.
Runtime complexity: Adaptive sampling techniques may involve machine learning algorithms or other complex processes, which can increase the complexity of data collection and analysis. This can be challenging for devices with limited processing power or storage capacity.

Workarounds

Staged deployment: Instead of deploying a sampling scheme on all devices, deploy it on small but representative test groups. The "sampled" data from these groups can then be analyzed against the more expansive datasets for biases and domain mismatches. This can be done in stages and iteratively, ensuring the system is never highly biased.
Ensemble of sampling techniques: Different devices can be armed with slightly different sampling techniques, varying from sample sizes and windows to different algorithms. Sure, this increases the complexity of post-processing, but it takes care of sampling errors and selection biases.

Resource constraints and runtime complexity are hard to mitigate; unfortunately, that is the cost of implementing better sampling techniques. Finally: test, test, and more tests.

Takeaways

Adaptive sampling can be a useful tool for IoT if one can model the system being observed. We briefly introduced a few modeling approaches with varying complexities and discussed some benefits, challenges, and solutions for deployment.
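As a concrete illustration of the filtering approach described above, here is a toy Java sketch of an adaptive sampler built on an exponentially weighted moving average, one of the weighted-moving-average variants mentioned earlier. The smoothing factor, surprise threshold, and sampling intervals are invented for illustration; a real deployment would tune them against recorded data.

/**
 * Toy adaptive sampler: an exponentially weighted moving average (EWMA)
 * predicts the next reading; the prediction error ("surprise") decides
 * whether a sample is worth transmitting and how soon to sample again.
 */
public class AdaptiveSampler {
    private double ewma;            // running prediction of the signal
    private boolean initialized;
    private final double alpha;     // smoothing factor, 0 < alpha <= 1
    private final double threshold; // error above this counts as "high entropy"

    public AdaptiveSampler(double alpha, double threshold) {
        this.alpha = alpha;
        this.threshold = threshold;
    }

    /** Returns true if the measurement is surprising enough to transmit. */
    public boolean shouldTransmit(double measurement) {
        if (!initialized) {
            ewma = measurement;
            initialized = true;
            return true; // always transmit the first reading
        }
        double error = Math.abs(measurement - ewma);
        ewma = alpha * measurement + (1 - alpha) * ewma; // update prediction
        return error > threshold;
    }

    /** Shorter interval after surprising readings, longer when stable. */
    public long nextIntervalMillis(boolean surprising) {
        return surprising ? 1_000 : 60_000; // invented intervals
    }

    public static void main(String[] args) {
        AdaptiveSampler sampler = new AdaptiveSampler(0.3, 0.5);
        double[] temps = {21.0, 21.1, 21.0, 23.5, 23.6, 21.2};
        for (double t : temps) {
            boolean send = sampler.shouldTransmit(t);
            System.out.printf("%.1f°C -> %s, next sample in %d ms%n",
                    t, send ? "transmit" : "skip", sampler.nextIntervalMillis(send));
        }
    }
}

A Kalman filter or a learned model would slot into the same shouldTransmit decision point; only the prediction step changes.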

By Ankur Agarwal

Top IoT Experts


Frank Delporte

Java Developer - Technical Writer,
CodeWriter.be

Frank Delporte is a technical writer at Azul, blogger on webtechie.be and foojay.io, author of "Getting started with Java on Raspberry Pi" (https://webtechie.be/books/), and contributor to Pi4J. Frank blogs about his experiments with Java, sometimes combined with electronic components, on the Raspberry Pi.

Tim Spann

Principal Developer Advocate,
Cloudera

https://github.com/tspannhw/SpeakerProfile/blob/main/README.md Tim Spann is a Principal Developer Advocate in Data In Motion for Cloudera. He works with Apache NiFi, Apache Pulsar, Apache Kafka, Apache Flink, Flink SQL, Apache Pinot, Trino, Apache Iceberg, DeltaLake, Apache Spark, Big Data, IoT, Cloud, AI/DL, machine learning, and deep learning. Tim has over ten years of experience with the IoT, big data, distributed computing, messaging, streaming technologies, and Java programming. Previously, he was a Developer Advocate at StreamNative, Principal DataFlow Field Engineer at Cloudera, a Senior Solutions Engineer at Hortonworks, a Senior Solutions Architect at AirisData, a Senior Field Engineer at Pivotal, and a Team Leader at HPE. He blogs for DZone, where he is the Big Data Zone leader, and runs a popular meetup in Princeton & NYC on Big Data, Cloud, IoT, deep learning, streaming, NiFi, the blockchain, and Spark. Tim is a frequent speaker at conferences such as ApacheCon, DeveloperWeek, Pulsar Summit, and many more. He holds a BS and MS in computer science.

Carsten Rhod Gregersen

Founder, CEO,
Nabto

Carsten Rhod Gregersen is the CEO and Founder of Nabto, a P2P IoT connectivity provider that enables remote control of devices with secure end-to-end encryption.

Emily Newton

Editor-in-Chief,
Revolutionized

Emily Newton is a journalist who regularly covers stories for the tech and industrial sectors. She loves seeing the impact technology can have on every industry.

The Latest IoT Topics

Experts on How to Make Product Development More Predictable and Cost-Efficient
With eight different product development professionals weighing in on common issues and concerns, we hope to provide a pretty high level of insight.
March 21, 2023
by Sasha Baglai
· 688 Views · 1 Like
Use AWS Controllers for Kubernetes To Deploy a Serverless Data Processing Solution With SQS, Lambda, and DynamoDB
Discover how to use AWS Controllers for Kubernetes to create a Lambda function, SQS, and DynamoDB table and wire them together to deploy a solution.
March 20, 2023
by Abhishek Gupta CORE
· 1,797 Views · 1 Like
What To Know Before Implementing IIoT
Industrial Internet of Things (IIoT) technology offers many benefits for manufacturers to improve operations. Here’s what to consider before implementation.
March 17, 2023
by Zac Amos
· 2,914 Views · 1 Like
Apache Kafka Is NOT Real Real-Time Data Streaming!
Learn how Apache Kafka enables low latency real-time use cases in milliseconds, but not in microseconds; learn from stock exchange use cases at NASDAQ.
March 17, 2023
by Kai Wähner CORE
· 2,890 Views · 1 Like
PySpark Data Pipeline To Cleanse, Transform, Partition, and Load Data Into Redshift Database Table
In this article, we will discuss how to create an optimized data pipeline using PySpark and load the data into a Redshift database table.
March 16, 2023
by Amlan Patnaik
· 1,545 Views · 1 Like
Use After Free: An IoT Security Issue Modern Workplaces Encounter Unwittingly
Use After Free is one of the two major memory allocation-related threats affecting C code. It is preventable with the right solutions and security strategies.
March 16, 2023
by Joydeep Bhattacharya CORE
· 1,999 Views · 1 Like
Real-Time Analytics for IoT
If you're searching for solutions for IoT data in the age of customer-centric data, real-time analytics using the Apache Pinot™ database is a great solution.
March 16, 2023
by David G. Simmons CORE
· 2,541 Views · 3 Likes
From Data Stack to Data Stuck: The Risks of Not Asking the Right Data Questions
Why is applying a methodology like SOFT important? And, even more, what risks can we encounter if we’re not doing so? This post aims to cover both aspects.
March 16, 2023
by Francesco Tisiot
· 2,209 Views · 1 Like
5 Common Firewall Misconfigurations and How to Address Them
Properly setting up your firewall can reduce the likelihood of data breaches. A few common firewall misconfigurations to watch for are mismatched authentication standards, open policy configurations, non-compliant services, unmonitored log output, and incorrect test systems data.
March 16, 2023
by Zac Amos
· 1,684 Views · 1 Like
What Is the Difference Between VOD and OTT Streaming?
With the growing popularity of streaming services, this article discusses the distinction between OTT and VOD and their differences.
March 15, 2023
by Anna Smith
· 1,804 Views · 1 Like
Using AI To Optimize IoT at the Edge
Artificial intelligence has the potential to revolutionize the combined application of IoT and edge computing. Here are some thought-provoking possibilities.
March 15, 2023
by Devin Partida
· 2,198 Views · 1 Like
7 Salesforce CRM Integration Methods You Must Know About
Salesforce offers various ways to integrate data/apps hosted on on-premise and cloud systems. Let's review the various Salesforce CRM integration options.
March 15, 2023
by Richa Pokhriyal
· 3,784 Views · 1 Like
How To Use Artificial Intelligence to Ensure Better Security
In this article, readers will learn how companies can leverage Artificial Intelligence (AI) to improve data security (cybersecurity) in these five ways.
March 14, 2023
by Chandra Shekhar
· 1,869 Views · 1 Like
Pretty Data All in Neat Rows
How to take "flat" (i.e., columnar) JSON data and turn it into row-based information using the jq utility and a little ingenuity.
March 14, 2023
by Leon Adato
· 1,111 Views · 1 Like
Use Golang for Data Processing With Amazon Kinesis and AWS Lambda
Are you interested in learning how to use Golang and AWS Lambda to build a serverless solution? Learn more in this tutorial.
March 14, 2023
by Abhishek Gupta CORE
· 4,127 Views · 2 Likes
OWASP Kubernetes Top 10
The OWASP Kubernetes Top 10 puts all possible risks in order of overall commonality or probability.
March 13, 2023
by Nigel Douglas
· 8,686 Views · 7 Likes
Stateful Stream Processing With Memphis and Apache Iceberg
In this article, readers will use a tutorial to learn how to use Apache Iceberg on AWS S3 to process and enrich large scale data, including code and images.
March 12, 2023
by Idan Asulin
· 3,593 Views · 3 Likes
Developers' Guide: How to Execute Lift and Shift Migration
This article reviews lift and shift migration, how to prepare your application, lift and shift migration strategies, and post migration considerations.
March 11, 2023
by Tejas Kaneriya
· 3,361 Views · 1 Like
LazyPredict: A Utilitarian Python Library to Shortlist the Best ML Models for a Given Use Case
Discussing LazyPredict to create simple ML models without writing a lot of code, and determine which models perform best without modifying their parameters!
March 9, 2023
by Sanjay Kumar
· 3,503 Views · 2 Likes
Building Custom Solutions vs. Buy-and-Build Software
This article explores building custom solutions vs. buy-and-build software and describes the challenges of building a FIX engine.
March 9, 2023
by Rob Austin CORE
· 1,164 Views · 4 Likes
