
Coding

Also known as the build stage of the SDLC, coding focuses on the writing and programming of a system. The Zones in this category take a hands-on approach to equip developers with knowledge about frameworks, tools, and languages that they can tailor to their own build needs.

Functions of Coding

Frameworks


A framework is a collection of code that supports the development process by providing ready-made components. Frameworks establish architectural patterns and structures that help speed up development. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring Framework, Drupal, Angular, Eclipse, and more.

Java


Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.

JavaScript


JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular because of its versatility, and it is typically the default choice for front-end work unless a project calls for a more specialized language. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.

Languages


Programming languages allow us to communicate with computers, and they operate like sets of instructions. There are numerous types of languages, including procedural, functional, object-oriented, and more. Whether you’re looking to learn a new language or trying to find some tips or tricks, the resources in the Languages Zone will give you all the information you need and more.

Tools


Development and programming tools support building applications and frameworks; they are used for creating, debugging, and maintaining programs — and much more. The resources in this Zone cover topics such as compilers, database management systems, code editors, and other software tools and can help ensure engineers are writing clean code.

Latest Refcards and Trend Reports

Trend Report: Low Code and No Code
Refcard #288: Getting Started With Low-Code Development
Refcard #363: JavaScript Test Automation Frameworks
Trend Report: Modern Web Development

DZone's Featured Coding Resources

Trend Report

Database Systems

Every modern application and organization collects data. With that, there is a constant demand for database systems to expand, scale, and take on more responsibilities. Database architectures have become more complex, and as a result, there are more implementation choices. An effective database management system allows for quick access to database queries so an organization can efficiently make informed decisions. So how does one effectively scale a database system without sacrificing its quality? Our Database Systems Trend Report offers answers to this question by providing industry insights into database management selection and evaluation criteria. It also explores database management patterns for microservices, relational database migration strategies, time series compression algorithms and their applications, advice for the best data governance practices, and more. The goal of this report is to set up organizations for scaling success.

PostgreSQL: Bulk Loading Data With Node.js and Sequelize


By Brett Hoyer
Whether you're building an application from scratch with zero users, or adding features to an existing application, working with data during development is a necessity. This can take different forms, from mock data APIs reading data files in development, to seeded database deployments closely mirroring an expected production environment. I prefer the latter, as I find fewer deviations from my production toolset lead to fewer bugs.

A Humble Beginning

For the sake of this discussion, let's assume we're building an online learning platform offering various coding courses. In its simplest form, our Node.js API layer might look like this.

JavaScript
// server.js
const express = require("express");
const App = express();

const courses = [
  {title: "CSS Fundamentals", "thumbnail": "https://fake-url.com/css"},
  {title: "JavaScript Basics", "thumbnail": "https://fake-url.com/js-basics"},
  {title: "Intermediate JavaScript", "thumbnail": "https://fake-url.com/intermediate-js"}
];

App.get("/courses", (req, res) => {
  res.json({data: courses});
});

App.listen(3000);

If all you need is a few items to start building your UI, this is enough to get going. Making a call to our /courses endpoint will return all of the courses defined in this file. However, what if we want to begin testing with a dataset more representative of a full-fledged database-backed application?

Working With JSON

Suppose we inherited a script exporting a JSON array containing thousands of courses. We could import the data, like so.

JavaScript
// courses.js
module.exports = [
  {title: "CSS Fundamentals", "thumbnail": "https://fake-url.com/css"},
  {title: "JavaScript Basics", "thumbnail": "https://fake-url.com/js-basics"},
  {title: "Intermediate JavaScript", "thumbnail": "https://fake-url.com/intermediate-js"},
  ...
];

// server.js
...
const courses = require("/path/to/courses.js");
...

This eliminates the need to define our mock data within our server file, and now we have plenty of data to work with. We could enhance our endpoint by adding parameters to paginate the results and set limits on how many records are returned. But what about allowing users to post their own courses? How about editing courses? This solution gets out of hand quickly as you begin to add functionality. We'll have to write additional code to simulate the features of a relational database. After all, databases were created to store data. So, let's do that.

Bulk Loading JSON With Sequelize

For an application of this nature, PostgreSQL is an appropriate database selection. We have the option of running PostgreSQL locally or connecting to a PostgreSQL-compatible cloud-native database, like YugabyteDB Managed. Apart from being a highly performant distributed SQL database, developers using YugabyteDB benefit from a cluster that can be shared by multiple users. As the application grows, our data layer can scale out to multiple nodes and regions.

After creating a YugabyteDB Managed account and spinning up a free database cluster, we're ready to seed our database and refactor our code using Sequelize. The Sequelize ORM allows us to model our data to create database tables and execute commands. Here's how that works.

First, we install Sequelize from our terminal.

Shell
// terminal
> npm i sequelize

Next, we use Sequelize to establish a connection to our database, create a table, and seed our table with data.
JavaScript
// database.js
const fs = require("fs");
const { Sequelize, DataTypes } = require("sequelize");

// JSON-array of courses
const courses = require("/path/to/courses.js");

// Certificate file downloaded from YugabyteDB Managed
const cert = fs.readFileSync(CERTIFICATE_PATH).toString();

// Create a Sequelize instance with our database connection details
const sequelize = new Sequelize("yugabyte", "admin", DB_PASSWORD, {
  host: DB_HOST,
  port: "5433",
  dialect: "postgres",
  dialectOptions: {
    ssl: {
      require: true,
      rejectUnauthorized: true,
      ca: cert,
    },
  },
  pool: {
    max: 5,
    min: 1,
    acquire: 30000,
    idle: 10000,
  }
});

// Defining our Course model
const Course = sequelize.define(
  "course",
  {
    id: {
      type: DataTypes.INTEGER,
      autoIncrement: true,
      primaryKey: true,
    },
    title: {
      type: DataTypes.STRING,
    },
    thumbnail: {
      type: DataTypes.STRING,
    },
  }
);

// Exported so server.js can require it with CommonJS
module.exports = { Course };

async function seedDatabase() {
  try {
    // Verify that database connection is valid
    await sequelize.authenticate();

    // Create database tables based on the models we've defined
    // Drops existing tables if there are any
    await sequelize.sync({ force: true });

    // Creates course records in bulk from our JSON-array
    await Course.bulkCreate(courses);

    console.log("Courses created successfully!");
  } catch(e) {
    console.log(`Error in seeding database with courses: ${e}`);
  }
}

// Running our seeding function
seedDatabase();

By leveraging Sequelize's bulkCreate method, we're able to insert multiple records in one statement. This is more performant than inserting records one at a time, like this.

JavaScript
. . .
// JSON-array of courses
const courses = require("/path/to/courses.js");

async function insertCourses() {
  for (let i = 0; i < courses.length; i++) {
    await Course.create(courses[i]);
  }
}

insertCourses();

Individual inserts come with the overhead of connecting, sending requests, parsing requests, indexing, closing connections, etc. on a one-off basis. Of course, some of these concerns are mitigated by connection pooling, but generally speaking, the performance benefits of inserting in bulk are immense, not to mention far more convenient. The bulkCreate method even comes with a benchmarking option to pass query execution times to your logging functions, should performance be of primary concern.

Now that our database is seeded with records, our API layer can use this Sequelize model to query the database and return courses.

JavaScript
// server.js
const express = require("express");
const App = express();

// Course model exported from database.js
const { Course } = require("/path/to/database.js");

App.get("/courses", async (req, res) => {
  try {
    const courses = await Course.findAll();
    res.json({data: courses});
  } catch(e) {
    console.log(`Error in courses endpoint: ${e}`);
  }
});

App.listen(3000);

Well, that was easy! We've moved from a static data structure to a fully functioning database in no time. What if we're provided the dataset in another data format, say, a CSV file exported from Microsoft Excel? How can we use it to seed our database?

Working With CSVs

There are many NPM packages to convert CSV files to JSON, but none are quite as easy to use as csvtojson. Start by installing the package.

Shell
// terminal
> npm i csvtojson

Next, we use this package to convert our CSV file to a JSON array, which can be used by Sequelize.

// courses.csv
title,thumbnail
CSS Fundamentals,https://fake-url.com/css
JavaScript Basics,https://fake-url.com/js-basics
Intermediate JavaScript,https://fake-url.com/intermediate-js

JavaScript
// database.js
...
const csv = require('csvtojson');

const csvFilePath = "/path/to/courses.csv";

// JSON-array of courses from CSV
const courses = await csv().fromFile(csvFilePath);
...
await Course.bulkCreate(courses);
...

Just as with our well-formatted courses.js file, we're able to easily convert our courses.csv file to bulk insert records via Sequelize.

Conclusion

Developing applications with hardcoded data can only take us so far. I find that investing in tooling early in the development process sets me on the path toward bug-free coding (or so I hope!). By bulk-loading records, we're able to work with a representative dataset, in a representative application environment. As I'm sure many agree, that's often a major bottleneck in the application development process.
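As a closing aside not covered in the walkthrough above, the benchmarking option mentioned earlier might be used roughly like this. This is a sketch under assumptions: it relies on Sequelize's standard benchmark and logging query options (when benchmark is true, the logging callback also receives the elapsed time in milliseconds), and it would sit inside an async function such as seedDatabase().

JavaScript
// Hedged sketch: report how long the bulk insert took, assuming Sequelize's
// `benchmark` and `logging` query options.
await Course.bulkCreate(courses, {
  benchmark: true,
  logging: (sql, timingMs) => console.log(`${sql} -- took ${timingMs} ms`),
});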

Refcard #333

Drupal 9 Essentials

By Cindy McCourt
Spring Cloud: How To Deal With Microservice Configuration (Part 1)
By Mario Casari
How To Generate Code Coverage Report Using JaCoCo-Maven Plugin
By Harshit Paul
Iptables Basic Commands for Novice

While working with customers or while reproducing scenarios where I would have to allow or drop connectivity to certain ports in Linux OS, I have always found iptables command very helpful. This article is for users who don't have insights into networking or, specifically, iptables command. This article would help such users quickly get a list of all rules and drop or allow traffic to ports. I have tested these commands in Ubuntu 22. Shell $ uname -a Linux cpandey 5.15.0-57-generic #63-Ubuntu SMP Thu Nov 24 13:43:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux $ cat /etc/os-release PRETTY_NAME="Ubuntu 22.04.1 LTS" So let us learn together. 1. Let us have a basic understanding of what iptables command is first. It is a standard firewall available with Linux OS. This command(with t switch) can modify any of the network table filters, nat, mangle, raw, and security. Here the filter is the default table (if the no -t option is passed); it is used for packet filtering. It contains the built-in chains INPUT (for packets destined to local sockets), FORWARD (for packets being routed through the box), and OUTPUT (for locally-generated packets). Shell $ man iptables SYNOPSIS iptables [-t table] {-A|-C|-D} chain rule-specification rule-specification = [matches...] [target] match = -m matchname [per-match-options] target = -j targetname [per-target-options] DESCRIPTION Iptables and ip6tables are used to set up, maintain, and inspect the tables of IPv4 and IPv6 packet filter rules in the Linux kernel. Several different tables may be defined. Each table contains a number of built-in chains and may also contain user-defined chains. Each chain is a list of rules which can match a set of packets. Each rule specifies what to do with a packet that matches. This is called a `target', which may be a jump to a user-defined chain in the same table. TARGETS A firewall rule specifies criteria for a packet and a target. If the packet does not match, the next rule in the chain is examined; if it does match, then the next rule is specified by the value of the target, which can be the name of a user-defined chain, one of the targets described in iptables-extensions(8), or one of the special values ACCEPT, DROP or RETURN. ACCEPT means to let the packet through. DROP means to drop the packet on the floor. RETURN means stop traversing this chain and resume at the next rule in the previous (calling) chain. If the end of a built-in chain is reached or a rule in a built-in chain with target RETURN is matched, the target specified by the chain policy determines the fate of the packet. TABLES There are currently five independent tables (which tables are present at any time depends on the kernel configuration options and which modules are present). -t, --table table This option specifies the packet matching table which the command should operate on. If the kernel is configured with automatic module loading, an attempt will be made to load the appropriate module for that table if it is not al‐ ready there. The tables are as follows: filter: This is the default table (if no -t option is passed). It contains the built-in chains INPUT (for packets destined to local sockets), FORWARD (for packets being routed through the box), and OUTPUT (for locally-generated packets). nat: This table is consulted when a packet that creates a new connection is encountered. 
It consists of four built-ins: PREROUTING (for altering packets as soon as they come in), INPUT (for altering packets destined for local sock‐ ets), OUTPUT (for altering locally-generated packets before routing), and POSTROUTING (for altering packets as they are about to go out). IPv6 NAT support is available since kernel 3.7. mangle: This table is used for specialized packet alteration. Until kernel 2.4.17 it had two built-in chains: PREROUTING (for altering incoming packets before routing) and OUTPUT (for altering locally-generated packets before routing). Since kernel 2.4.18, three other built-in chains are also supported: INPUT (for packets coming into the box itself), FORWARD (for altering packets being routed through the box), and POSTROUTING (for altering packets as they are about to go out). raw: This table is used mainly for configuring exemptions from connection tracking in combination with the NOTRACK target. It registers at the netfilter hooks with higher priority and is thus called before ip_conntrack, or any other IP tables. It provides the following built-in chains: PREROUTING (for packets arriving via any network interface) OUTPUT (for packets generated by local processes) security: This table is used for Mandatory Access Control (MAC) networking rules, such as those enabled by the SECMARK and CONNSECMARK targets. Mandatory Access Control is implemented by Linux Security Modules such as SELinux. The secu‐ rity table is called after the filter table, allowing any Discretionary Access Control (DAC) rules in the filter table to take effect before MAC rules. This table provides the following built-in chains: INPUT (for packets coming into the box itself), OUTPUT (for altering locally-generated packets before routing), and FORWARD (for altering packets being routed through the box). 2. Let us start a basic HTTP server using the python utility. Shell $ python3 -m http.server Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ... 3. How we can list firewall rules using iptables command. $ sudo iptables -L -v -n Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # Explanation of switch used -v, --verbose Verbose output. -n, --numeric Numeric output. IP addresses and port numbers will be printed in numeric format. By default, the program will try to dis‐ play them as host names, network names, or services (whenever applicable). -L, --list [chain] List all rules in the selected chain. 4. Access HTTP server listening on 8000 port which we started using python utility. $ curl -s -D - -o /dev/null http://localhost:8000 HTTP/1.0 200 OK Server: SimpleHTTP/0.6 Python/3.10.6 Date: Sat, 14 Jan 2023 01:28:02 GMT Content-type: text/html; charset=utf-8 Content-Length: 2571 Note: -s hides the progress bar -D - dump headers to stdout indicated by - -o /dev/null send output (HTML) to /dev/null essentially ignoring it # In http server, we can see GET entry. $ python3 -m http.server Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ... 127.0.0.1 - - [14/Jan/2023 06:00:37] "GET / HTTP/1.1" 200 - 5. Block or Drop incoming traffic to 8000 port. Shell $ sudo iptables -A INPUT -p tcp --dport 8000 -j DROP # Check connectivity to port $ telnet localhost 8000 Trying 127.0.0.1... 
telnet: Unable to connect to remote host: Connection timed out $ curl -v http://localhost:8000 * Trying 127.0.0.1:8000... * Trying ::1:8000... * connect to ::1 port 8000 failed: Connection refused 6. We can again check the list of rules. However, switch -S provides us with a convenient way to list rules. With this switch, we can see rules in the same format as we applied them. This would help us to reuse the rules. Shell $ sudo iptables -S|grep DROP -A INPUT -p tcp -m tcp --dport 8000 -j DROP # We can also list output for only INPUT chain $ sudo iptables -L INPUT -v -n Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 33 1980 DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8000 # without -n switch $ sudo iptables -L INPUT -v Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 33 1980 DROP tcp -- any any anywhere anywhere tcp dpt:8000 # without verbose option $ sudo iptables -L INPUT -n Chain INPUT (policy ACCEPT) target prot opt source destination DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8000 7. We can also list rules with line numbers; this is particularly helpful when deleting specific rules. Shell $ sudo iptables -L --line-numbers Chain INPUT (policy ACCEPT) num target prot opt source destination 1 DROP tcp -- anywhere anywhere tcp dpt:8000 Chain FORWARD (policy DROP) num target prot opt source destination 1 DOCKER-USER all -- anywhere anywhere 2 DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere 3 ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED 8. Delete Rule. Shell # Delete 1st rule for INPUT chain. $ sudo iptables -D INPUT 1 # check connectivity again. $ telnet localhost 8000 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. $ curl -s -D - -o /dev/null http://localhost:8000 HTTP/1.0 200 OK Server: SimpleHTTP/0.6 Python/3.10.6 Date: Sat, 14 Jan 2023 02:07:12 GMT Content-type: text/html; charset=utf-8 Content-Length: 2571 9. We can also delete a rule by specifying the complete rule with the -D switch. Shell $ sudo iptables -A INPUT -p tcp --dport 8000 -j DROP $ sudo iptables -S|grep INPUT -A INPUT -p tcp -m tcp --dport 8000 -j DROP $ sudo iptables -D INPUT -p tcp -m tcp --dport 8000 -j DROP $ curl -s -D - -o /dev/null http://localhost:8000 HTTP/1.0 200 OK Server: SimpleHTTP/0.6 Python/3.10.6 Date: Sat, 14 Jan 2023 02:13:39 GMT Content-type: text/html; charset=utf-8 Content-Length: 2571 That's it for this article. I hope this article will help you to have a basic understanding of iptables commands.
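Beyond dropping traffic, the same INPUT chain can allow specific sources before a blanket drop. The commands below are a hedged addition, not part of the original walkthrough; the source address 192.168.1.10 is a hypothetical example.

Shell
# Hypothetical example: allow TCP traffic to port 8000 only from 192.168.1.10 and drop it
# from everyone else. Rules are evaluated top-down, so the ACCEPT rule must be appended first.
$ sudo iptables -A INPUT -p tcp -s 192.168.1.10 --dport 8000 -j ACCEPT
$ sudo iptables -A INPUT -p tcp --dport 8000 -j DROP

# Verify both rules are in place
$ sudo iptables -S | grep 8000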

By Chandra Shekhar Pandey
Implementing Infinite Scroll in jOOQ

In this article, we cover keyset pagination and infinite scroll via jOOQ. The schema used in the examples is available here. You may also like "We need tool support for keyset pagination." jOOQ Keyset Pagination Keyset (or seek) pagination doesn't have a default implementation in Spring Boot, but this shouldn't stop you from using it. Simply start by choosing a table's column that should act as the latest visited record/row (for instance, the id column), and use this column in the WHERE and ORDER BY clauses. The idioms relying on the ID column are as follows (sorting by multiple columns follows this same idea): SQL SELECT ... FROM ... WHERE id < {last_seen_id} ORDER BY id DESC LIMIT {how_many_rows_to_fetch} SELECT ... FROM ... WHERE id > {last_seen_id} ORDER BY id ASC LIMIT {how_many_rows_to_fetch} Or, like this: SQL SELECT ... FROM ... WHERE ... AND id < {last_seen_id} ORDER BY id DESC LIMIT {how_many_rows_to_fetch} SELECT ... FROM ... WHERE ... AND id > {last_seen_id} ORDER BY id ASC LIMIT {how_many_rows_to_fetch} Expressing these queries in jOOQ should be a piece of cake. For instance, let's apply the first idiom to the PRODUCT table via PRODUCT_ID: Java List<Product> result = ctx.selectFrom(PRODUCT) .where(PRODUCT.PRODUCT_ID.lt(productId)) .orderBy(PRODUCT.PRODUCT_ID.desc()) .limit(size) .fetchInto(Product.class); In MySQL, the rendered SQL is (where productId = 20 and size = 5) as follows: SQL SELECT `classicmodels`.`product`.`product_id`, `classicmodels`.`product`.`product_name`, ... FROM `classicmodels`.`product` WHERE `classicmodels`.`product`.`product_id` < 20 ORDER BY `classicmodels`.`product`.`product_id` DESC LIMIT 5 This was easy! You can practice this case in KeysetPagination for MySQL. In the same place, you can find the approach for PostgreSQL, SQL Server, and Oracle. However, keyset pagination becomes a little bit trickier if the WHERE clause becomes more complicated. Fortunately, jOOQ saves us from this scenario via a synthetic clause named SEEK. Let's dive into it! The jOOQ SEEK Clause The jOOQ synthetic SEEK clause simplifies the implementation of keyset pagination. Among its major advantages, the SEEK clause is type-safe and is capable of generating/emulating the correct/expected WHERE clause (including the emulation of row value expressions). For instance, the previous keyset pagination example can be expressed using the SEEK clause, as shown here (productId is provided by the client): Java List<Product> result = ctx.selectFrom(PRODUCT) .orderBy(PRODUCT.PRODUCT_ID) .seek(productId) .limit(size) .fetchInto(Product.class); Note that there is no explicit WHERE clause. jOOQ will generate it on our behalf based on the seek() arguments. While this example may not look so impressive, let's consider another one. This time, let's paginate EMPLOYEE using the employee's office code and salary: Java List<Employee> result = ctx.selectFrom(EMPLOYEE) .orderBy(EMPLOYEE.OFFICE_CODE, EMPLOYEE.SALARY.desc()) .seek(officeCode, salary) .limit(size) .fetchInto(Employee.class); Both officeCode and salary are provided by the client, and they land into the following generated SQL sample (where officeCode = 1, salary = 75000, and size = 10): SQL SELECT `classicmodels`.`employee`.`employee_number`, ... 
FROM `classicmodels`.`employee` WHERE (`classicmodels`.`employee`.`office_code` > '1' OR (`classicmodels`.`employee`.`office_code` = '1' AND `classicmodels`.`employee`.`salary` < 75000)) ORDER BY `classicmodels`.`employee`.`office_code`, `classicmodels`.`employee`.`salary` DESC LIMIT 10 Check out the generated WHERE clause! I am pretty sure that you don't want to get your hands dirty and explicitly write this clause. How about the following example? Java List<Orderdetail> result = ctx.selectFrom(ORDERDETAIL) .orderBy(ORDERDETAIL.ORDER_ID, ORDERDETAIL.PRODUCT_ID.desc(), ORDERDETAIL.QUANTITY_ORDERED.desc()) .seek(orderId, productId, quantityOrdered) .limit(size) .fetchInto(Orderdetail.class); And the following code is a sample of the generated SQL (where orderId = 10100, productId = 23, quantityOrdered = 30, and size = 10): SQL SELECT `classicmodels`.`orderdetail`.`orderdetail_id`, ... FROM `classicmodels`.`orderdetail` WHERE (`classicmodels`.`orderdetail`.`order_id` > 10100 OR (`classicmodels`.`orderdetail`.`order_id` = 10100 AND `classicmodels`.`orderdetail`.`product_id` < 23) OR (`classicmodels`.`orderdetail`.`order_id` = 10100 AND `classicmodels`.`orderdetail`.`product_id` = 23 AND `classicmodels`.`orderdetail`.`quantity_ordered` < 30)) ORDER BY `classicmodels`.`orderdetail`.`order_id`, `classicmodels`.`orderdetail`.`product_id` DESC, `classicmodels`.`orderdetail`.`quantity_ordered` DESC LIMIT 10 After this example, I think it is obvious that you should opt for the SEEK clause and let jOOQ do its job! Look, you can even do this: Java List<Product> result = ctx.selectFrom(PRODUCT) .orderBy(PRODUCT.BUY_PRICE, PRODUCT.PRODUCT_ID) .seek(PRODUCT.MSRP.minus(PRODUCT.MSRP.mul(0.35)), val(productId)) .limit(size) .fetchInto(Product.class); You can practice these examples in SeekClausePagination, next to the other examples, including using jOOQ-embedded keys as arguments of the SEEK clause. Implementing Infinite Scroll Infinite scroll is a classical usage of keyset pagination and is gaining popularity these days. For instance, let's assume that we plan to obtain something, as shown in this figure: So, we want an infinite scroll over the ORDERDETAIL table. At each scroll, we fetch the next n records via the SEEK clause: Java public List<Orderdetail> fetchOrderdetailPageAsc(long orderdetailId, int size) { List<Orderdetail> result = ctx.selectFrom(ORDERDETAIL) .orderBy(ORDERDETAIL.ORDERDETAIL_ID) .seek(orderdetailId) .limit(size) .fetchInto(Orderdetail.class); return result; } The fetchOrderdetailPageAsc() method gets the last visited ORDERDETAIL_ID and the number of records to fetch (size), and it returns a list of jooq.generated.tables.pojos.Orderdetail, which will be serialized in JSON format via a Spring Boot REST controller endpoint defined as @GetMapping("/orderdetail/{orderdetailId}/{size}"). On the client side, we rely on the JavaScript Fetch API (of course, you can use XMLHttpRequest, jQuery, AngularJS, Vue, React, and so on) to execute an HTTP GET request, as shown here: JavaScript const postResponse = await fetch('/orderdetail/${start}/${size}'); const data = await postResponse.json(); For fetching exactly three records, we replace ${size} with 3. 
Moreover, the ${start} placeholder should be replaced by the last visited ORDERDETAIL_ID, so the start variable can be computed as the following: Java start = data[size-1].orderdetailId; While scrolling, your browser will execute an HTTP request at every three records, as shown here: http://localhost:8080/orderdetail/0/3 http://localhost:8080/orderdetail/3/3 http://localhost:8080/orderdetail/6/3 … You can check out this example in SeekInfiniteScroll. Infinite Scrolling and Dynamic Filters Now, let's add some filters for ORDERDETAIL that allows a client to choose the price and quantity ordered range, as shown in this figure: We can easily implement this behavior by fusing the powers of SEEK and SelectQuery: Java public List<Orderdetail> fetchOrderdetailPageAsc( long orderdetailId, int size, BigDecimal priceEach, Integer quantityOrdered) { SelectQuery sq = ctx.selectFrom(ORDERDETAIL) .orderBy(ORDERDETAIL.ORDERDETAIL_ID) .seek(orderdetailId) .limit(size) .getQuery(); if (priceEach != null) { sq.addConditions(ORDERDETAIL.PRICE_EACH.between( priceEach.subtract(BigDecimal.valueOf(50)), priceEach)); } if (quantityOrdered != null) { sq.addConditions(ORDERDETAIL.QUANTITY_ORDERED.between( quantityOrdered - 25, quantityOrdered)); } return sq.fetchInto(Orderdetail.class); } The following example URL involves loading the first page of three records that have prices between 50 and 100 and an order quantity between 50 and 75: http://localhost:8080/orderdetail/0/3?priceEach=100&quantityOrdered=75 You can find the complete example in SeekInfiniteScrollFilter for MySQL, SQL Server, PostgreSQL, and Oracle.
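For completeness, the @GetMapping("/orderdetail/{orderdetailId}/{size}") endpoint described above (but not shown in full) might be wired up roughly as follows. This is a sketch under assumptions: the controller class name and the static import path of the generated Tables class are mine, not code from the article; only the jOOQ query itself mirrors the fetchOrderdetailPageAsc() method shown earlier.

Java
// Hedged sketch: a Spring Boot controller running the SEEK-based keyset query
// against an injected jOOQ DSLContext. Adjust the generated-class imports to
// wherever your code generator places them.
import static jooq.generated.Tables.ORDERDETAIL;

import java.util.List;

import jooq.generated.tables.pojos.Orderdetail;

import org.jooq.DSLContext;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderdetailController {

    private final DSLContext ctx;

    public OrderdetailController(DSLContext ctx) {
        this.ctx = ctx;
    }

    @GetMapping("/orderdetail/{orderdetailId}/{size}")
    public List<Orderdetail> fetchOrderdetailPageAsc(
            @PathVariable long orderdetailId, @PathVariable int size) {
        // Keyset pagination: fetch the next `size` rows after the last visited ORDERDETAIL_ID
        return ctx.selectFrom(ORDERDETAIL)
                .orderBy(ORDERDETAIL.ORDERDETAIL_ID)
                .seek(orderdetailId)
                .limit(size)
                .fetchInto(Orderdetail.class);
    }
}

Constructor injection of DSLContext follows the usual jOOQ-with-Spring-Boot setup; the client-side Fetch API calls shown above would hit this endpoint unchanged.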

By Anghel Leonard
Visual Network Mapping Your K8s Clusters To Assess Performance

Building performant services and systems is at the core of every business. Tons of technologies emerge daily, promising capabilities that help you surpass your performance benchmarks. However, production environments are chaotic landscapes that exact a heavy performance toll when not maintained and monitored. Although Kubernetes is the defacto choice for container orchestration, many organizations fail to implement it. Growing organizations, in the process of upscaling their services, unintentionally introduce complexities into the system. Knowing how the infrastructure is set up and how clusters operate and communicate are crucial. Most of the infrastructural setup is tranched into a network of systems to communicate and share the workloads. If only we could visually see how the systems are connected and the underlying factors. Mapping the network using an efficient tool for visualization and assessment is essential for monitoring and maintaining services. Introduction To Visual Network Mapping Network mapping is the process of identifying and cataloging all the devices and connections within a network. A visual network map is a graphical representation of the network that displays the devices and the links between them. Visual network maps can provide a comprehensive understanding of a network's topology and identify potential problems or bottlenecks, allowing for modifications and expansion plans that can significantly improve troubleshooting, planning, analysis, and monitoring. Open-source security tools, such as OpenVAS, Nmap, and Nessus, can be used to conduct network mapping and generate visual network maps. These tools are freely available, making them a cost-effective option for organizations looking to improve their network security. Furthermore, many open-source security tools also offer active community support, enabling users to share knowledge, tips, and best practices for using the tool to its full potential. Benefits of Using Visual Network Maps An effective tool for planning and developing new networks, expanding or modernizing existing networks, and analyzing network problems or issues is a visual network map. A proper setup of visual network maps can exponentially augment the monitoring, tracking, and remediation capabilities. It can give you a clear and complete picture of the network, enabling you to pinpoint the issue’s potential source and resolve it then and there, or it can assist you in real-time network monitoring and notify you of any changes or problems beforehand. Introduction to Caretta and Grafana Caretta is an open-source network visualization and monitoring tool that enables real-time network viewing and monitoring. Grafana is an open-source data visualization and monitoring platform that enables you to create customized dashboards and alerts as well as examine and analyze data. An effective solution for comprehending and managing your network can be created by combining Caretta and Grafana. How Caretta Uses eBPF and Grafana Caretta’s reason for existence is to help you understand the topology and the relationships between devices in distributed environments. It offers various capabilities such as device discovery, real-time monitoring, alerts, notifications, and reporting. It uses Victoria Metrics to gather and publish its metrics, and any Prometheus-compatible dashboard can use the results. Carreta makes it possible to accept typical control-plane node annotations by enabling tolerations. 
It gathers network information, such as device and connection details, using the eBPF (extended Berkeley Packet Filter) kernel functionality and then uses the Grafana platform to present the information in a visual map. Grafana’s Role in Visualizing Caretta’s Network Maps Grafana is designed to be a modular and flexible tool that integrates and onboards a wide range of data sources and custom applications with simplicity. Due to its customizable capabilities, you can modify how the network map is presented using the Grafana dashboard. Additionally, you can pick from several visualization options to present the gathered data in an understandable and helpful way. Grafana is crucial for both showing the network data that Caretta has gathered and giving users a complete picture of the network. Using Caretta and Grafana To Create a Visual Network Map To use Caretta and Grafana for creating a visual network map, you must set up, incorporate, and configure them. The main configuration item is the Caretta daemonset. You must deploy the Caretta daemonset to the cluster of choice that will collect the network metrics into a database and set up the Grafana data source to point to the Caretta database to see the network map. Prerequisites and Requirements for Using Caretta and Grafana Caretta is a modern tool integrated with advanced features. It relies on Linux kernel version >= 4.16 and x64 bit system helm chart. Let's dive in and see how to install and configure this brilliant tool combination. Steps for Installing and Configuring Caretta and Grafana With an already pre-configured helm chart, installing Caretta is just a few calls away. The recommendation is to install Caretta in a new, unique namespace. helm repo add groundcover https://helm.groundcover.com/ helm repo update helm install caretta --namespace caretta --create-namespace groundcover/caretta The same can be applied to installing Grafana. helm install --name my-grafana --set "adminPassword=secret" \n --namespace monitoring -f custom-values.yaml stable/grafana Our custom-values.yaml will look something like below: ## Grafana configuration grafana.ini: ## server server: protocol: http http_addr: 0.0.0.0 http_port: 3000 domain: grafana.local ## security security: admin_user: admin admin_password: password login_remember_days: 1 cookie_username: grafana_admin cookie_remember_name: grafana_admin secret_key: hidden ## database database: type: rds host: mydb.us-west-2.rds.amazonaws.com ## session session: provider: memory provider_config: "" cookie_name: grafana_session cookie_secure: true session_life_time: 600 ## Grafana data persistence: enabled: true storageClass: "-" accessModes: - ReadWriteOnce size: 1Gi Configuration You can configure Caretta using helm values. Values in Helm are a chart’s setup choices. When the chart is installed, you can change the values listed in a file called values.yaml, which is part of the chart package, and customize the configurations based on the requirement at hand. 
An example of configuration overwriting default values is shown below: pollIntervalSeconds: 15 # set metrics polling interval tolerations: # set any desired tolerations - key: node-role.kubernetes.io/control-plane operator: Exists effect: NoSchedule config: customSetting1: custom-value1 customSetting2: custom-value2 victoria-metrics-single: server: persistentVolume: enabled: true # set to true to use persistent volume ebpf: enabled: true # set to true to enable eBPF config: someOption: ebpf_options The pollIntervalSeconds sets the interval at which metrics are polled. In our case, we have set it to poll every 15 seconds. The tolerations section allows specifying tolerations for the pods. In the shown example, pods are allowed only to run on nodes that have the node-role.kubernetes.io/control-plane label and exist with the effect NoSchedule. The config section allows us to specify custom configuration options for the application. The victoria-metrics-single section allows us to configure the Victoria-metrics-single server. Here, it is configuring the persistent volume as enabled. The eBPF section allows us to enable eBPF and configure its options. Creating a Visual Network Map With Caretta and Grafana Caretta consists of two parts: the “Caretta Agent” and the “Caretta Server.” Every node in the cluster runs the Caretta Agent Kubernetes DaemonSet, which collects information about the cluster’s status. You will need to include the data gathered by Caretta in Grafana in order to view it as a network map and generate a visual network map. apiVersion: apps/v1 kind: DaemonSet metadata: name: caretta-depoy-test namespace: caretta-depoy-test spec: selector: matchLabels: app: caretta-depoy-test template: metadata: labels: app: caretta-depoy-test spec: containers: - name: caretta-depoy-test image: groundcover/caretta:latest command: ["/caretta"] args: ["-c", "/caretta/caretta.yaml"] volumeMounts: - name: config-volume mountPath: /caretta volumes: - name: config-volume configMap: name: caretta-config Data from the Caretta Agent is received by the Caretta Server, a Kubernetes StatefulSet, which then saves it in a database. apiVersion: apps/v1 kind: StatefulSet metadata: name: caretta-depoy-test labels: app: caretta-depoy-test spec: serviceName: caretta-depoy-test replicas: 1 selector: matchLabels: app: caretta-depoy-test template: metadata: labels: app: caretta-depoy-test spec: containers: - name: caretta-depoy-test image: groundcover/caretta:latest env: - name: DATABASE_URL value: mydb.us-west-2.rds.amazonaws.com ports: - containerPort: 80 name: http To accomplish this, you will need to create a custom data source plugin in Grafana to connect to Caretta’s data and then develop visualizations in Grafana to show that data. [datasources] [datasources.caretta] name = caretta-deploy-test type = rds url = mydb.us-west-2.rds.amazonaws.com access = proxy isDefault = true Customization Options for the Network Map and How to Access Them The network map that Caretta and Grafana produced can be customized in a variety of ways. We can customize the following: Display options: With display customization options, you have control over the layout of the map, the thickness, and the color of the connections and devices. Data options: With data options, you may select which information, including warnings, performance metrics, and details about your device and connection, is shown on the map. 
Alerting options: With alerting options, you can be informed of any network changes or problems, such as heavy traffic, sluggish performance, or connectivity problems.

Visualization options: With visualization options, you can present the gathered data in an understandable and useful way.

Usually, you'll need to use the Grafana dashboard to access these and other customization options. Depending on the version of Caretta and Grafana you are running and your particular setup and needs, you will have access to different options and settings.

Interpreting and Using the Visual Network Map

The primary goals of a visual network map made with Caretta and Grafana are aiding in network topology comprehension, the identification of possible bottlenecks or problems, and the planning and troubleshooting of network problems. You must comprehend the various components of the map and what they stand for in order to interpret and use the visual network map. Some of the types of information that may be displayed on the map are:

Devices: The network's endpoints, including servers, switches, and routers, are presented on the map.

Connections: The connections between devices, such as network cables, wireless connectivity, or virtual connections, and sometimes the connectivity type, may be depicted on the map.

Data: Performance indicators, alarms, and configuration information will be displayed on the map.

Tips for Using the Network Map To Assess Performance in Your K8s Cluster

Creating a curated, informative, and scalable network map is more challenging than it sounds. But with a proper tool set, this becomes manageable. We have seen what we can accomplish using Caretta and Grafana together. Now, let's see what we need to consider when using network maps that showcase the performance metrics of your Kubernetes clusters.

First and foremost, understand the network topology of the cluster, including the physical and virtual networks that your services run on, and how pod-to-pod communication and pod networking work. Next, ensure that the network plugin you are using is compatible with your application. Finally, define network policies to secure communication between pods, control ingress and egress traffic, and support monitoring and troubleshooting (a minimal example appears after the conclusion below).

Conclusion

Breaking down large systems into microservices, making systems distributed, and orchestrating them is the most widely followed approach to boost performance and uptime. Kubernetes and Docker are the market leaders here. As performant as this approach is, observability is a concern in large-scale distributed systems. We need to consider all the influencing outliers and anomalies to monitor and enhance the overall system with optimal performance in mind. New technologies make innovations and advancements easy but introduce unknown impediments to the system. You need an observability tool that can track all the network operations and present them in an efficient and informative way. Grafana is the leading tool in the monitoring space. By combining Caretta, an open-source network visualization and monitoring tool, with Grafana, we can unlock the true value of our infrastructure.
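Building on the network-policy tip above, here is a minimal illustrative manifest. It is a sketch under assumptions, not configuration shipped with Caretta: it simply reuses the caretta-depoy-test names from the earlier examples and limits ingress to same-namespace pods on port 80.

YAML
# Hypothetical NetworkPolicy sketch: only pods in the same namespace may reach the
# caretta-depoy-test pods, and only on TCP port 80 (the port exposed in the StatefulSet above).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-caretta-ingress
  namespace: caretta-depoy-test
spec:
  podSelector:
    matchLabels:
      app: caretta-depoy-test
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # any pod in the same namespace
      ports:
        - protocol: TCP
          port: 80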

By Anton Lawrence
A Complete Guide to AngularJS Testing

AngularJS is a very powerful JavaScript framework. Many organizations use this framework to build their front-end single-page applications easily and quickly. With AngularJS, you can create reusable code to reduce code duplication and easily add new features. According to the stats, 4,126,784 websites are AngularJS customers. As AngularJS has gained popularity in the web development community, its testing frameworks have had to grow along with it. Currently, the most popular framework for unit testing Angular applications is Jasmine, and several others are gaining traction. In this article on AngularJS testing, let’s understand the difference between Angular and AngularJS, top features, benefits, testing methodologies, and components. However, before we get into details, let us first understand the basics of AngularJS. What Is AngularJS? AngularJS is a robust JavaScript framework for building complex, single-page web applications. It’s based on the MVC (Model-View-Controller) design pattern, and it uses dependency injection to make your code more modular and testable. AngularJS emphasizes cleanliness and readability; its syntax is lightweight, consistent, and simple to read. The framework also allows you to separate presentation from business logic easily — it’s ideal for small and large projects with complex client requirements. AngularJS has been used in production by several large companies, such as Google and Microsoft, as well as other organizations like NASA. It was created by Google employee Misko Hevery, who still maintains the development of the framework. And it’s open-source software released under a BSD license, so it’s free to use commercially. There are different versions of AngularJS available in the market. The first Angular version 1.0 was released in 2010 by Google. Angular 2.0 was released in September 2016. Angular 4.0 and 5.0 versions were released in March 2017 and November 2017, respectively. Google provides all the necessary support to this framework, and with a broad developer community, the features and functionalities are always up to date. Let’s now understand the importance of AngularJS. Why AngularJS? The following are the main justifications for choosing AngularJS as your go-to framework: AngularJS allows you to work with components, and hence these components can be reused, which saves time and unnecessary effort spent in coding. It is a great framework that allows you to create Rich Internet Applications. It allows developers to write client-side applications using JavaScript in a Model View Controller (MVC) architecture. It is an open-source, free framework, meaning an active developer community is contributing to it. Top Features for AngularJS AngularJS is a JavaScript framework that has quickly gained popularity because of its powerful features. The framework is being majorly used for building client-side web applications. It is designed to make the development process easier, faster, and more efficient. The framework accomplishes this by providing two-way data binding, dependency injection, a modular architecture, and much more. Let’s look at some of the top features of AngularJS: Model View Controller (MVC) Architecture MVC is a popular architecture with three main components: Model: Used to manage the application data requirements. View: Used for displaying the required application data. Controller: Helps to connect the model and the view component. It is about splitting your application into three components and performing the coding requirements. 
This is done in AngularJS, where we can effectively manage our coding with less time and effort. Data Model Binding There is a complete synchronization of the model and view layers. This means that any change in data for the model layer automatically brings the changes in the view layer and vice versa. This immediate action automatically ensures the model and view are updated every time. Support for Templates The main advantage of using AngularJS is the use of template support. You can use these templates and use them effectively for your coding requirements.Apart from the above great features, there is a predefined testing framework called Karma that helps to create unit tests using AngularJS applications, which is unique. Limitations of Using AngularJS AngularJS contains many features that make it a powerful tool. However, this tool has limitations that developers should be aware of when deciding to use it, including: Finding the right set of developers to understand this complicated framework becomes challenging. There are security issues since it is a JavaScript-only framework. You have to rely on server-side authentication and authorization for securing your application. Once the user disables the executed JavaScript, nothing will be visible except the basic details. Components of AngularJS Applications Building a single-page web app with AngularJS can be as simple as linking to the JavaScript file and adding the ng-app directive to the HTML. However, this setup is only suitable for small applications. When your AngularJS app starts to grow, it’s essential to organize it into components. The component pattern is a well-established way to solve this problem in the object-oriented world. AngularJS refers to them as directives and follows the same basic principle of isolating behavior from markup. An AngularJS application consists of three main components: ng-app. ng-model. ng-bind. We will discuss how all of these three components help to create AngularJS applications. ng-app: This directive allows you to define and link an AngularJS application to HTML. ng-model: This directive binds the values of AngularJS application data to corresponding HTML controls. ng-bind: This directive binds the AngularJS Application data to HTML tags. Differences Between Angular and AngularJS AngularJS and Angular are two different frameworks, with the former being a complete and powerful JavaScript framework for building dynamic web apps, while the latter is an open-source library that adds features to the original AngularJS. AngularJS is a full-featured framework for building dynamic, single-page applications. Angular was built based on the design principles of AngularJS but is not simply an upgrade or update. As it is a different framework, it has some significant differences from AngularJS. The most basic difference between the two is that Angular is based on TypeScript, a superset of JavaScript that adds static typing and class-based object-oriented programming to an otherwise standard JavaScript language. ANGULAR ANGULARJS Angular uses components and directives. AngularJS supports MVC architecture. Angular is written in Microsoft’s TypeScript language. AngularJS is written in JavaScript. Angular is supported on popular mobile browsers. AngularJS does not support mobile browsers. It is easier to maintain and manage large applications in Angular. Difficult to maintain and manage large applications in AngularJS. Angular comes with support for the Angular CLI tool. It doesn’t have a CLI tool. 
Prerequisites Before Learning AngularJS There are some prerequisites that need to be followed before you start implementing or even testing the AngularJS applications. Some of them include: Knowledge of HTML, CSS, and JavaScript. JavaScript functions and error handling. Basic understanding of Document Object Model (DOM). Concepts related to Model View Controller (MVC). Basic knowledge of libraries. Angular CLI understanding and implementation. Creating AngularJS Applications Follow the steps below to create and execute the AngularJS application in a web browser: Step 1: Load the required framework using the < Script > tag. You can execute the following code in the script tag. First, enter the required source details in the src. <script> src=”https://angularjs/angle.js” </script> Step 2: Define the AngularJS application using the ng-app directive. You can execute the following code: <div ng-app = “”> ........ </div> Step 3: Define a model name using the ng-model directive. You can execute the following code: <p> Enter your Required Name: <input type = “text” ng-model = “name”></p> Step 4: Bind the above model requirements using the ng-bind directive. <p> Hello <span ng-bind = “name”></span>!</p> Step 5: You can execute the above steps on an HTML page, and the required changes are executed or validated in the web browser. Testing AngularJS Applications Using Different Methodologies AngularJS is a modern web application framework that promotes cleaner, more expressive syntax for all types of applications. With its reliance on dependency injection and convention over configuration, it can make writing applications more efficient and consistent. However, AngularJS applications must be tested to ensure they function properly. Most AngularJS developers know that the framework is based on an MVC pattern and that there are many different approaches to testing its applications. With many frameworks and libraries today, getting lost in the sea of choices is easy. In this section of this article on AngularJS testing, we’ll take a look at three different frameworks for testing AngularJS applications: Jasmine, Karma, and Protractor. Jasmine Jasmine is one of the most popular unit-testing frameworks for JavaScript. It has a strict syntax and a BDD/TDD flavor making it a great fit for AngularJS testing. Karma Karma is a JS runner created by the AngularJS team itself, and it is one of the best in AngularJS testing. Jasmine is a framework that allows you to test AngularJS code, while Karma provides various methods that make it easier to call Jasmine tests. For installing Karma, you need to install node JS on your machine. Once Node.js is installed, you can install Karma using the npm installer. Protractor Protractor is an end-to-end testing framework for AngularJS applications. It is a Node.js program built on top of WebDriverJS. Protractor runs tests against the application running in a real browser. You can use this framework for functional testing, but you are still required to write unit and integration tests. Cypress It is a JavaScript E2E testing framework used for AngularJS testing. Cypress provides various bundled packages such as Mocha, Chai, and Sinon. However, the only supportive language with Cypress is JavaScript. How to Perform AngularJS Testing? Unit testing has become a standard practice in most software companies. Before rolling out features and improvements for end-user use, testing the coding requirements before the code is released on the production server is crucial. 
The following aspects are covered during the testing phase: Validation of product requirements that are developed. Validation of test cases and test scenarios by the testing teams. AngularJS testing can be performed in two ways: Manual testing Automation testing Manual testing is all about executing different test cases manually, which takes considerable time and effort. This is performed by a team of manual testers where the required test cases are reviewed and validated for features and enhancements planned in every sprint. Automation testing is a far more effective and quicker way of executing the testing requirements. This can be performed using an automation testing tool that helps automate the testing approach being followed. Many organizations have shifted their focus from manual to automation testing, as this is where the actual value lies. Gone are those days of traditional testing when a large amount of time was spent setting up the testing environment and finalizing the infrastructure requirements. Cross-browser testing is essential when running a web application on different supported browsers. This important technique allows you to validate the web application functionality and other dependencies. Summary We discussed how AngularJS is a prominent open-source framework if you are trying to build single-page web applications. There are different testing methodologies that you can adopt for AngularJS testing to make sure exceptional outcomes are achieved in the long run. There is always a crucial role played by cross-browser testing platforms when testing your requirements on different supported platforms and devices.
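To make the Jasmine and Karma discussion above concrete, here is a minimal sketch of an AngularJS unit test. The myApp module and GreetingController are hypothetical names used only for illustration, and the module/inject helpers assume the angular-mocks library is loaded by the Karma configuration.

JavaScript
// Hedged sketch: a Jasmine spec for a hypothetical AngularJS controller.
// Assumes angular-mocks provides the global module() and inject() helpers.
describe('GreetingController', function () {
  var $controller, $rootScope;

  // Load the (hypothetical) application module before each spec
  beforeEach(module('myApp'));

  beforeEach(inject(function (_$controller_, _$rootScope_) {
    $controller = _$controller_;
    $rootScope = _$rootScope_;
  }));

  it('puts a default name on the scope', function () {
    var $scope = $rootScope.$new();
    $controller('GreetingController', { $scope: $scope });
    expect($scope.name).toBe('World');
  });
});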

By Shakura Banu
Simulating and Troubleshooting StackOverflowError in Kotlin

In this series of simulating and troubleshooting performance problems in Kotlin, let’s discuss how to simulate StackOverflow errors. StackOverflow error is a runtime error, which is thrown when a thread’s stack size exceeds its allocated memory limit. Video: To see the visual walk-through of this post, click below: Sample Program Here is a sample Kotlin program, which generates the StackOverflowError: package com.buggyapp class StackOverflowApp { fun start() { start() } } fun main() { System.`in`.read() try { println(StackOverflowApp().start()) } finally { System.`in`.read() } } You can notice the sample program contains the StackOverflowApp class. This class has a start() method, which calls itself recursively. As a result of this implementation, the start() method will be invoked infinitely. Fig: start() method repeatedly added to the thread’s stack, resulting in StackOverflowError As per the implementation, the start() method will be added to the thread’s stack frame an infinite number of times. Thus, after a few thousand iterations thread’s stack size limit would be exceeded. Once the stack size limit is exceeded, it will result in StackOverflowError. Execution When we executed the above program, as expected, java.lang.StackOverflowError was thrown in seconds: Exception in thread "main" java.lang.StackOverflowError at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) at com.buggyapp.StackOverflowApp.start(StackOverflowApp.kt:5) : : How To Diagnose ‘java.lang.StackOverflowError’? You can diagnose StackOverflowError either through a manual or automated approach. Manual Approach When an application experiences StackOverflowError, it will be either printed in the application log file or in a standard error stream. From the stack trace, you will be able to figure out which line of code causing the infinite looping. Automated Approach On the other hand, you can also use yCrash open source script, which would capture 360-degree data (GC log, 3 snapshots of thread dump, heap dump, netstat, iostat, vmstat, top, top -H,…) from your application stack within a minute and generate a bundle zip file. You can then either manually analyze these artifacts or upload them to the yCrash server for automated analysis. We used the automated approach. Once the captured artifacts were uploaded to the yCrash server, it instantly generated the below root cause analysis report highlighting the source of the problem. Fig: yCrash highlighting thread may result in StackOverflowError You can notice the yCrash tool precisely points out that the thread stack length is greater than 400 lines, and it has the potential to generate StackOverflowError. The Tool also points out the exact stack trace of the thread, which is going on an infinite loop. Using this information from the report, one can easily go ahead and fix the StackOverflowError.
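The article stops at diagnosis. As a hedged illustration of one possible fix (not part of the original sample), recursion that is legitimate but very deep can be marked tail-recursive so the Kotlin compiler rewrites it into a loop; alternatively, the thread's stack size can be raised with the JVM -Xss option, though that only postpones the error when the recursion never terminates, as in the buggy start() method above.

Kotlin
// Illustrative fix: a bounded, tail-recursive function compiles to a loop and uses
// constant stack space, so it cannot throw StackOverflowError regardless of depth.
class CountdownApp {
    tailrec fun countDown(n: Long): Long {
        return if (n <= 0) n else countDown(n - 1)
    }
}

fun main() {
    // Completes normally even for millions of "recursive" steps.
    println(CountdownApp().countDown(10_000_000L))
}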

By Ram Lakshmanan CORE
Comparing Flutter vs. React Native

As mobile app development continues to grow in popularity, businesses are looking for ways to create cross-platform apps that can be used on a variety of devices. When we say cross-platform, we of course refer to Android and iOS. Per Statista: Android maintained its position as the leading mobile operating system worldwide in June 2021, controlling the mobile OS market with a close to 73 percent share. Google's Android and Apple's iOS jointly possess over 99 percent of the global market share. In this article, we will compare two popular frameworks for cross-platform development: Flutter and React Native. We will look at the pros and cons of each framework and discuss which one is better suited for use in 2023.

Why Is Mobile App Development So Popular?

Developing mobile applications is a steadily growing business niche. Virtually all people on the planet have mobile phones, which means a nearly unlimited number of potential users. Consequently, there are apps for almost everything nowadays. You can choose among many ways to design and build an app. You can use the native approach, e.g. Swift and Objective-C for creating iOS apps and Java for Android apps. These are the official Apple and Google programming languages, respectively, which come with first-party support and frequently updated features. Alternatively, you may use cross-platform frameworks such as Flutter or React Native.

What Is Cross-platform App Development?

Before we begin, let's define the term "cross-platform app development" and divide it into two categories: hybrid development and native development. Cross-platform apps are apps that can be developed from a single codebase and function virtually identically on both iOS and Android operating systems. (In this article, we are focusing on mobile app development; we talk more about web and desktop apps here.)

Hybrid Development

Hybrid apps are developed with a combination of web technologies such as HTML5, CSS, and JavaScript. This means that hybrid apps share some code across platforms (e.g. the HTML/CSS/JS code), and this shared code runs in a webview on the target platform. WebView apps are hybrid apps that use embedded webviews to render their user interface, within which you can use HTML5, CSS, and JavaScript for customization. WebView apps will have some limitations in accessing the device APIs out of the box, requiring additional effort to achieve some of the same functionality as native apps. The trade-off is that these apps are cross-platform out of the box, which can be a significant time saver. Hybrid apps may look the same on both platforms but behave differently, depending on the platform-specific APIs available to them. E.g. a weather app would check the API of the current location's weather service on both platforms and return different data according to what is available on each platform.

Native Development

Native apps are developed with the native SDKs of their target platforms (e.g. Android or iOS). This means that they do not share any code across platforms: their code is written only for the targeted platform, and the UI is implemented using platform-specific widgets and libraries. Native apps provide a better user experience than hybrid apps and also look more native on each platform, but they cost more to develop and take longer to release new features due to the time needed for developers to learn the APIs of the target platforms. In general, it is ideal to develop your app using the native development tools of its target platform (e.g. 
Android Studio or Xcode).

Flutter and React Native: Cross-Platform Frameworks

Both Flutter and React Native are among the best cross-platform development frameworks available today. React Native builds its UI out of the platforms' native widgets, while Flutter renders its own widget set; both deliver a highly customizable, responsive UI while sharing code across different platforms. The Flutter framework is developed by Google while the React Native framework is developed by Facebook, so these tech giants have very large teams dealing with everything from the frameworks' SDKs to their documentation, support, and so on. While Flutter and React Native apps both ship as native applications, they have an advantage over traditional native app development because they can share a significant proportion of their codebase across platforms. According to Instagram, the amount of code shared between iOS and Android via their React Native features was over 90%.

The History of Flutter and React Native

Flutter was first announced at the Dart Developer Summit back in 2015. The main idea behind this cross-platform mobile app development framework is to give developers tools to build native apps for iOS and Android using one single codebase written in Google's own Dart programming language. The first beta of Flutter arrived in February 2018, and the first stable release (1.0) came out on December 4, 2018. React Native started a little earlier than Flutter, with its first public release in March 2015, and it took a couple of years of rapid iteration before it was widely considered production-ready. Strictly speaking, React Native has never shipped a 1.0 release (it is still versioned 0.x), but it matured quickly with huge support from tech giant Facebook. As you can see, both technologies are relatively new, but don't let that fool you. React Native is already used by big players like Facebook, Instagram, Airbnb, and Uber. Flutter doesn't boast the same big names but has already been embraced by BMW, Toyota, eBay, and, of course, Google's own Google Pay.

How Flutter and React Native Compare

The main difference between Flutter and React Native is that React Native does not compile into a native mobile language (Java, Swift, Objective-C) but rather runs its JavaScript code at runtime. Flutter, on the other hand, compiles its Dart code to native machine code, which can impact performance (discussed later). Another big difference is that vanilla JavaScript (plus JSX) is used for writing components in React Native. Facebook's developers recommend using Flow or TypeScript when working with React Native because of JavaScript's dynamic nature. Flutter code, in turn, is written in Dart, which brings a static type system of its own. In our opinion, the decision on which of these two technologies to choose should be based more on your preferences than on their actual features and capabilities. Of course, it's good to know multiple programming paradigms so you can easily pick up new languages and frameworks, even ones that are not written in a language you're familiar with. Programmers who are already familiar with JavaScript (ES2015+) or TypeScript/Flow will find it much easier to start working with React Native. This is particularly true of developers who have used React for the web, as there is a great deal of overlap between React and React Native. JavaScript still has a significant market share in the mobile development space, and because React Native and Flutter each keep you close to a language you may already know and prefer, it's hard to say which one will be more popular in 2023. 
There are also other important factors that might influence this decision, such as: Companies’ preference toward a specific technology stack Developer’s familiarity with a given language/framework Availability of developers with skills necessary for using a specific technology As we have already pointed out, React Native and Flutter are both going to be significant players in 2023, so it’s up to you to choose which is the best option for your long-term goals. How Cross-platform Development Frameworks Work Although cross-platform mobile development frameworks share many concepts and features, they’re each created with different goals in mind. React Native was designed to provide native code performance combined with the ease of development that React web brings to the table. The idea is not to use a single set of shared components between iOS and Android but instead to have completely separate UIs wrapped into a single JavaScript bundle, allowing you to ship almost half of the app’s code in a single place. Flutter was created mainly to fulfill Google’s needs for… Google. In other words, another attempt at marrying fast development cycles with native code performance and building reusable UI components that can be shared between iOS and Android apps. This is why Google’s Flutter is so much faster than React Native. Flutter was also designed with the idea of making app development easier and more accessible because it allows writing code using Dart, a language that can be learned in a weekend and mastered in days or even hours depending on the developer’s skillset. This is why we believe that Flutter will be the mobile development framework of choice for companies that need to create lots of native mobile apps very quickly without sacrificing performance or features. Building Mobile Apps Is Fun Again When Google announced Flutter, developers were stunned by how well it performs in practice compared to other technologies developed specifically for the purpose of building cross-platform mobile applications. React Native’s philosophy of sharing UI code between iOS and Android was a great initiative, but due to React Native’s inherent limitations, the resulting apps cannot perform as well as native ones. Flutter comes with a lot of goodies that you will not find in any other tool today. Dart is an impressive language that has been built from the ground up for the purpose of creating mobile apps. Dart is currently the fastest language available for building Android and iOS apps, makes it easier to build performant UI components, has great IDE (integrated development environment) support with powerful autocompletion features, allows doing live coding prototyping without losing app state, and finally embraces object-oriented programming by making it mandatory. Having an opinionated framework means that Google will be able to make many important decisions for you, allowing the community to focus on what’s truly important – building apps. Flutter came with a complete toolchain and a beautiful Material Design-like set of widgets that developers can reuse in their apps. Google has also created a number of integrations with 3rd party libraries such as image-processing libraries for handling images in an efficient way, SQL databases (made accessible through abstractions), and text editors. All of this is presented to developers as a cohesive package that has been designed with speed, ease of use, productivity, and performance in mind. 
Pros and Cons of Flutter and React Native Apps Apps created with Flutter are indistinguishable from native ones. They come with the same performance and the same look & feel (apart from some platform-specific stylistic aspects). The main issues that people usually complain about when building apps using React Native are related to its runtime environment, which is heavier than managing separate processes for each architecture. This means that you will not be able to pull off a pure native app performance using React Native, although you can get close. Flutter does not come with the same benefits as React Native in terms of supporting existing JavaScript codebases and allowing reuse of some components shared between apps for iOS and Android. Now, let’s dive a little deeper into the technical pros and cons of these two frameworks. Pros and Cons in Terms of Native Performance React Native comes with an improved JavaScript virtual machine that is faster than V8 thanks to its JIT compiler. It also benefits from being an ahead-of-time compiled framework, which means that you are free to ship whatever codebase you need because it will be compiled into a native executable. In practice, React Native is as fast as pure native apps because it can achieve the same performance of an iOS app without requiring any changes to the iOS build settings. Flutter comes with its own Ahead-of-time compiler that will emit optimized code for both iOS and Android once you have built your project. You get native performance without having to ship the whole codebase in your application binary just like with React Native. Pros and Cons in Terms of App Size React Native apps usually come with a JavaScript runtime that weighs about 300kb gzipped, although it is possible to reduce this number by tweaking some options such as Bypass filling (which will force React Native to skip a process of filling its virtual DOM with the result of diffing it against the native UI) as well as by setting useDeveloperMode to true (which will resize images in memory and reduce their quality). Flutter comes with an ahead-of-time compiler that allows developers to ship only the codebase needed for the app they are building without having to bundle anything with it. It is possible to run Flutter inside an existing JavaScript VM if you want to, which will allow you to save on the space needed for your app. Pros and Cons in Terms of Minimal Required Sdk Version React Native can usually be built against any iOS 9+ or Android 5.0+ SDK without any problem, but it goes without saying that to achieve the best performance you should target the latest SDK versions available at the time of your release. In practice, React Native apps can be built against older iOS and Android SDKs with a limited set of features being available at runtime, although to get all the features you should still target the latest SDK versions available. Flutter apps can be built against Android version 21 (Lollipop) and newer, although it is recommended to build against the latest SDK versions available for best performance. Flutter can be run on iOS 8 or newer but calling some APIs may result in runtime crashes given that Apple has deprecated most of the APIs that Flutter uses. Pros and Cons in Terms of UI Development Flutter comes with its own set of widgets for rendering the UI, which means that you can reuse existing iOS or Android code when building Flutter apps. 
Some third-party libraries are available for making it easier to reuse existing native components, although this is still a work in progress, as it is not easy to map Flutter widgets to existing iOS and Android UI components. React Native comes with a bridge that allows you to reuse existing iOS and Android code as JavaScript modules, as well as exposing some APIs for manually creating the bridge between your native UI components and the JavaScript code that will handle rendering them.

Pros and Cons in Terms of Debugging

React Native comes with its own debugger that can be attached to your running app on iOS and Android, which provides developers with a preview of the current state of the JavaScript virtual machine along with various tools for inspecting memory usage or tweaking some options on the fly. Flutter comes with its own debugger as well, which can be attached to your running app on iOS and Android, providing developers with a preview of the current state of the rendering engine as well as various tools for inspecting memory usage or tweaking some options on the fly.

Pros and Cons in Terms of Code Reuse Between Mobile Platforms

React Native comes with its own set of APIs that can be used when developing for both iOS and Android. Although most companies using React Native will develop their apps on one platform first (usually iOS) before porting them to the other platform, it is also possible to write shared components between your iOS and Android applications if you so wish. Flutter apps are written in Dart rather than in the platforms' own languages, so while virtually all of that Dart code is shared between your iOS and Android applications, it is not possible to directly reuse your existing native iOS and Android code. However, third-party libraries are available for making it easier to reuse existing native components.

Is Flutter or React Native Easier To Learn?

Both React Native and Flutter are comparably easy to learn (in terms of APIs), although this will depend on the expertise of the developer. Both have a large and engaged developer community that helps new developers and consistently creates new tools and components. For a brand-new developer with little or no coding experience, we would probably recommend starting with React Native, as it comes with a set of predefined components that can be used to build iOS and Android apps, which means that you can learn one thing at a time without having to worry about learning all the APIs used for rendering views. However, we would probably recommend choosing Flutter over React Native for a developer with some coding experience, as the APIs offered by Flutter are closer to what you can find in both iOS and Android. In addition, the team behind Flutter is focusing greatly on ensuring that the development experience offered by Flutter can compete with the development experience offered by the other SDKs out there (including React Native).

Flutter vs. React Native in 2023

React Native came out in 2015, and since then it has been used by many companies. The JavaScript world changes very fast, so React Native has also evolved over time to include new features thanks to the contributions of the open-source community. Flutter is a much newer technology that can feel quite alien if you are coming from the Android or iOS world. Google has put a lot of effort into making it extremely easy to learn, so most people who are familiar with iOS or Android development should be able to pick it up in no time.

Closing

Flutter and React Native are both excellent choices for cross-platform application development. 
While they share some similarities, there are also some key differences that you should be aware of before deciding which one to use. Choosing the right cross-platform framework for your business application or startup app idea depends largely on your development experience, development team, and which native elements your project needs to access. We hope this article will help you make an informed decision about which framework is right for your next project. Frequently Asked Questions What is the difference between web development and mobile development? Web development and mobile development both create online and offline applications, but they do it in different ways. Mobile apps require a mobile operating system such as iOS or Android to run whereas web apps can run on any device with a web browser. What is Flutter? Flutter is an open source mobile application development framework created by Google. It allows developers to build native mobile apps for Android and iOS from a single codebase. What is React Native? React Native is an open source mobile app development framework created by Facebook. It allows developers to build native mobile apps for Android and iOS from a single codebase. Can React Native be used to develop apps for the web? No, React Native is a framework for creating native mobile apps only. However, React Native is the mobile equivalent of ReactJS, which is a popular framework for creating web applications. This article compares the two in detail. Can Flutter be used to develop apps for the web? Yes. Flutter supports the use of standards-based web technologies such as HTML, CSS, and JavaScript to generate web content. With the web support, you may compile existing Flutter code written in Dart into a browser client experience that is hosted on any website and deployed to any web server.

By Chris Fanchi
Creating a Wordle App in Jetpack Compose

Part 1: Let's Start With the Domain and the First Visual Component of Our Wordle App

By now, you have probably heard of Wordle, an app that gained popularity in late 2021 and continues to attract thousands of users to this day. In order to unravel how it works and learn about Jetpack Compose, the new Android library for creating user interfaces, we are going to create a replica of this well-known app.

We are going to start the design of our domain with the most basic part of it: we are going to model how we want to represent the letters. Since the initial state of our game will be a 6×5 board (and this board will be empty initially and filled little by little), we can represent these cells as a sealed class such as:

sealed class WordleLetter(open val letter: String) {
    object EmptyWordleLetter : WordleLetter("")
    data class FilledWordleLetter(override val letter: String) : WordleLetter(letter)
}

We can also add a validation to the FilledWordleLetter entity since, for convenience, we are representing the letter attribute as a String. We want it to have one and only one letter, so we can add this check in the constructor and throw an exception in case it is not fulfilled.

if (letter.count() != 1) {
    throw IllegalArgumentException("A WordleLetter can have one letter at most")
}

In addition, we also need to represent the state of each letter on our board. For this, we will use an enum class such as:

enum class LetterStatus {
    EMPTY, NOT_CHECKED, NOT_INCLUDED, INCLUDED, MATCH
}

Later, we will also add the colors in which we will paint each cell, corresponding to each of its possible states. Now that we have a basic representation of our letters and their possible states, we can start building the different entities that will represent each component of our board, starting once again with the letters. For this, we can create an entity that represents a letter together with its state, such as:

data class BoardLetter(
    val letter: WordleLetter,
    val state: LetterStatus
)

Each of the rows of the board will be formed by a List<BoardLetter> we can call BoardRow, and the complete board will be formed by a List<BoardRow>. We will build these entities later, but for now, it is enough to know that this will be their representation. If we pay attention to this implementation, we can see that the board is really a List<List<BoardLetter>>, but since we need to add functionality to each component of this structure, I have preferred to divide it into concrete classes to make the implementation easier and clearer.

But let's not get too far ahead of ourselves yet; for now, we have the representation of a letter with its state on the board, so let's start adding functionality to this class. The first thing we want to be able to do with our BoardLetter is write a letter, but how can we do that if all the members of our entity are immutable? Easy! We have used a data class, which provides us with the .copy method: instead of mutating our entity, we create a new instance of it with the modifications that we have specified. In addition, just as we want to add letters, we will want to remove them, and we will do exactly the same as with the creation, using the .copy method that allows us to maintain the immutability of our entity. 
fun setLetter(aLetter: String) = copy(
    letter = WordleLetter.FilledWordleLetter(aLetter),
    state = LetterStatus.NOT_CHECKED
)

fun deleteLetter() = copy(
    letter = WordleLetter.EmptyWordleLetter,
    state = LetterStatus.EMPTY
)

Finally, we will also add a convenience method to be able to create empty letters from which to start working. We will create this method inside a companion object to be able to invoke it without the need of having an instance of the class.

fun empty() = BoardLetter(WordleLetter.EmptyWordleLetter, LetterStatus.EMPTY)

Great! We already have our entity that represents a letter in our game, as well as a first approximation of the functionality we will need throughout our development. We cannot forget to write the tests for this class. I will not go into detail since they are trivial for this implementation, but they can be consulted here.

Now that we have the implementation of our domain ready, we can create the Jetpack Compose representation of it. For this, we are going to create a Composable called LetterBox, which will receive as parameters the letter that we want to paint and its state.

@Composable
fun LetterBox(
    letter: WordleLetter,
    state: LetterStatus
)

We want this component to show the letter in question that the user has written, and we also want its background to be painted in a different color depending on the state of the letter. The simplest way to replicate this behavior would be to add the background directly to a Composable Text. However, to make it look a little more elegant, we will use a Card, such that our component will look like this:

@Composable
fun LetterBox(
    letter: WordleLetter,
    state: LetterStatus
) {
    Card(
        shape = RoundedCornerShape(16.dp),
        colors = CardDefaults.cardColors(containerColor = mapToBackgroundColor(state)),
        elevation = CardDefaults.cardElevation(defaultElevation = 4.dp),
        modifier = Modifier.aspectRatio(1f)
    ) {
        Text(
            modifier = Modifier
                .fillMaxSize()
                .wrapContentHeight(),
            text = letter.letter,
            textAlign = TextAlign.Center
        )
    }
}

private fun mapToBackgroundColor(state: LetterStatus) = when (state) {
    EMPTY, NOT_CHECKED -> Color.White
    NOT_INCLUDED -> Color.LightGray
    INCLUDED -> Color.Yellow
    MATCH -> Color.Green
}

We will take advantage of this component to map the different states of each box to a different color, following the rules of the game. Once we have created this component, we can visualize it thanks to the @Preview annotation of Compose.

@Preview
@Composable
fun Preview() {
    LetterBox(
        letter = WordleLetter.FilledWordleLetter("A"),
        state = INCLUDED
    )
}

So much for this first installment on creating something similar to the Wordle app with Jetpack Compose. In future articles, we will create each of the rows of our board that will be composed of the components we created here, and we will finally create the complete game board, along with a dictionary to load the words we will use and all the logic related to the game. The complete code for the entire application can be found at this link. Until next time!
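The tests for BoardLetter are only linked, not shown, in the article. As an illustration of how trivial they are, here is a minimal sketch of what they might look like, assuming the kotlin.test library (backed by JUnit) is on the test classpath, that the test sits in the same package as the domain classes, and that the single-letter validation lives in FilledWordleLetter's constructor:

import kotlin.test.Test
import kotlin.test.assertEquals
import kotlin.test.assertFailsWith

class BoardLetterTest {

    @Test
    fun `setLetter stores the letter and marks it as not checked`() {
        val letter = BoardLetter.empty().setLetter("A")
        assertEquals(WordleLetter.FilledWordleLetter("A"), letter.letter)
        assertEquals(LetterStatus.NOT_CHECKED, letter.state)
    }

    @Test
    fun `deleteLetter returns the cell to its empty state`() {
        val letter = BoardLetter.empty().setLetter("A").deleteLetter()
        assertEquals(BoardLetter.empty(), letter)
    }

    @Test
    fun `a filled letter cannot contain more than one character`() {
        // Relies on the constructor check shown earlier in the article.
        assertFailsWith<IllegalArgumentException> {
            WordleLetter.FilledWordleLetter("AB")
        }
    }
}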

By Diego Ojeda
Kotlin Is More Fun Than Java And This Is a Big Deal

I first dabbled in Kotlin soon after its 1.0 release in 2016. For lack of paying gigs in which to use it, I started my own open-source project and released the first alpha over the Christmas holidays. I’m now firmly in love with the language. But I’m not here to promote my pet project. I want to talk about the emotional value of the tools we use, the joys and annoyances beyond mere utility. Some will tell you that there’s nothing you can do in Kotlin that you can’t do just as fine with Java. There’s no compelling reason to switch. Kotlin is just a different tool doing the same thing. Software is a product of the mind, not of your keyboard. You’re not baking an artisanal loaf of bread, where ingredients and the right oven matter as much as your craftsmanship. Tools only support your creativity. They don’t create anything. I agree that we mustn’t get hung up on our tools, but they are important. Both the hardware and software we use to create our code matter a great deal. I’ll argue that we pick these tools not only for their usefulness but also for the joy of working with them. And don’t forget snob appeal. Kotlin can be evaluated on all these three motivations. Let’s take a detour outside the world of software to illustrate. I’m an amateur photographer who spends way too much on gear, knowing full well that it doesn’t improve my work. I’m the butt of Ken Rockwell’s amusing rant: “Your camera doesn’t matter." Only amateurs believe that it does. Get a decent starter kit and then go out to interesting places and take lots of pictures, is his advice. Better even, take classes to get professional feedback on your work. Spend your budget on that instead of on fancy gear. In two words: Leica shmeica. He’s right. Cameras and gear facilitate your creativity at best, but a backpack full of it can weigh you down. A photographer creates with their eyes and imagination. You only need the camera to register the result of that process. Your iPhone Pro has superior resolution and sharpness over Henri Cartier-Bresson’s single-lens compact camera (Leica, by the way) that he used in the 1940s for The Decisive Moment. But your pics won’t come anywhere near his greatness. I didn’t take Ken’s advice to heart of course. The outlay on photo gear still exceeds what I spent on training over the years by a factor of five. Who cares, it’s my hobby budget. I shouldn’t have a need for something to desire it. I’m drawing the parallel with photography because the high-tech component makes it an attractive analogy with programming, but it’s hardly the same thing. Photography is the art of recognizing a great subject and composition when you see it, not mastering your kit. Anyone can click a button and all good cameras are interchangeable. Do you care which one Steve McCurry used for his famous photo of the Afghan girl? I don’t. On the other hand, the programmer’s relationship to their tools, especially the language, is a much more intimate one, like the musician has with their instrument. Both take thousands of hours of hard practice. An accomplished violinist can’t just pick up a cello. Similarly, you don’t migrate from Haskell to C# like you switch from Nikon to Canon. The latter is closer to swapping a Windows laptop for a Mac: far less of a deal. If like musicians, we interact with our tools eight hours a day, they must be great, not just good. Quality hardware for the programmer should be a no-brainer. It infuriates me how many companies still don’t get this. 
Everybody should be given the best setup that money can buy when it costs less than a senior dev’s monthly rate. There’s a joy that comes from working with a superior tool. Working with junk is as annoying as working with premium tools is delightful. The mechanical DAS keyboard I’m writing this on isn’t faster, but still, the best 150 euros ever spent on office equipment. Thirdly, there is snob appeal and pride of ownership. If utility and quality were all that mattered, nobody would be buying luxury brands. Fashion would not exist. A limousine doesn’t get you any faster from A to B – except perhaps on a social ladder. Spending 7000 on a Leica compact as an amateur is extravagant, but you can flaunt its prestigious red dot and imagine yourself a pro. If I were filthy rich, I’d get one. I would also buy a Steinway grand and love it more than it’s appropriate to love an inanimate object. Let’s look at the parallels with the most important tool the programmer has in their belt: the language. Programming is creating a new virtual machine inside the Matryoshka doll of other virtual machines that make up a running application. As for plain utility, each modern language is Turing complete and can do the job, but nobody can reasonably argue that that makes them equally useful for every job. To not overcomplicate the argument, I’ll stay within the JVM ecosystem. There is no coding job that you could implement in any of the JVM languages (Java, Kotlin, Scala, Groovy, Ceylon, Frege, etc.) which would be impossible to emulate in any of the other ones, but they differ greatly in their syntax and idioms. That, and their snob appeal. Yes, programmers turn up their noses at competitive tools, maybe more secretly than openly, but they do. I spent two years on a Scala project and attended the Scala world conference. Scala’s advanced syntactical constructs (higher kinded types, multiple argument lists) have been known to give rise to much my-language-is-better-than-yours snootiness. Don’t get me wrong: it’s impressively powerful but has a steep learning curve. It may be free to download, but when time is money, it’s expensive to master. It’s the Leica of programming languages and for some, that’s exactly the appeal: learning something hard and then boasting it’s not hard at all. It’s a long-standing Linux tradition. Kotlin has no such snob appeal. It was conceived to be a more developer-friendly Java, to radically upgrade the old syntax in ways that would never be possible in Java itself, due to the non-negotiable requirement for every new compiler to support source code written in 1999. If mere utility and snob appeal don’t apply, then the argument left to favor Kotlin over Java must be the positive experience of working with it. While coding my Kotlin project I was also studying for the OCP-17 Java exam. This proved a revealing exercise in comparative language analysis. Some features simply delight. Kotlin’s built-in null safety is wonderful, a killer feature. Don’t tell me you don’t need it because you’re such a clean coder. That betrays a naive denial of other people’s sloppiness. Most stuff you write interacts with their code. Do you plan on educating them too? Other (Java) features simply keep annoying me. Evolution forbids breaking change because each new baby must be viable and produce offspring for incremental change to occur. 
Likewise, the more you work with Kotlin, the more certain architectural decisions in the Java syntax stick out like ugly quirks that nature (i.e., Gosling and Goetz) can’t correct without breaking the legacy. Many things in Java feel awkward and ugly for that very reason. Nobody would design a language with fixed-length arrays syntactically distinct from other collection types. Arrays can take primitive value types (numbers and booleans), which you need to (un)box in an object for lists, sets, and maps. You wouldn’t make those arrays mutable and covariant. I give you a bag of apples, you call it a bag of fruit, insert a banana, and give it back to me. Mayhem! The delight of working with a language that doesn’t have these design quirks is as strong as the annoyance over a language that does. I make no excuses for the fact that my reaction is more emotional than rational. To conclude, I don’t want to denigrate what Java designers have achieved over the years. They’re smarter than me, and they didn’t have a crystal ball. In twenty years, the Kotlin team may well find out that they painted themselves in a corner over some design decision they took in 2023. Who knows. I’ll be retired and expect to be coding only for pleasure, not to impress anyone, or even be useful.
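To ground the null-safety point in code, here is a small illustrative Kotlin snippet, not taken from any particular project, showing the compiler stopping the unsafe call before it can ever blow up at runtime:

fun main() {
    // The type system distinguishes nullable from non-null references.
    val definitelyAName: String = "Jasper"
    val maybeAName: String? = null

    println(definitelyAName.length)   // fine: the compiler knows this is never null

    // println(maybeAName.length)     // does not compile: only safe (?.) or
                                      // non-null asserted (!!.) calls are allowed

    // The caller must handle null explicitly, e.g. with the safe-call and
    // elvis operators, instead of discovering the problem in production.
    println(maybeAName?.length ?: "no name supplied")
}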

By Jasper Sprengers CORE
How to Create Column Charts With JavaScript

With data all around us, we should know how to represent it graphically to understand better (and faster) what it tells us. One of the most common data visualization techniques is column charts, and I want to show you how you can easily create interactive ones using JavaScript. A column chart is a simple yet powerful way to display data when you need to compare values. From this tutorial, you will learn to make its different variations (basic single-series, multi-series, stacked, and 100% stacked column graphs) and apply effective customizations in a few more lines of JS code. As a cricket fan, I thoroughly watched the ICC Men's T20 World Cup held last month in Australia. I decided to use some data related to the championship for illustrative visualizations. The JavaScript column charts built throughout this tutorial will let us look into the batting statistics and, more precisely, the number of runs scored by the top 10 batsmen at the tournament. Let's have fun learning!

1. Basic JS Column Chart

A basic JavaScript column chart can be built very easily in just four steps. Let me show you what is to be done in each of these steps, along with explaining every line of code that I will write.

Create a container.
Include script files.
Prepare data.
Write a visualization code.

A. Create a Container

First of all, you need to set up a place for your chart. If you already have a web page where you want to put it, open your HTML file, and if not, create one from scratch. Then add a block-level HTML element and give it an ID. Also, set its width, height, and other styling parameters to suit your requirements. I have created a very basic HTML page, added a <div> element with the "container" id, and specified its width and height as 100% so that the resulting JS-based column chart fills the whole page:

HTML

<html>
  <head>
    <title>JavaScript Column Chart</title>
    <style type="text/css">
      html, body, #container {
        width: 100%;
        height: 100%;
        margin: 0;
        padding: 0;
      }
    </style>
  </head>
  <body>
    <div id="container"></div>
  </body>
</html>

B. Include Script Files

The easiest way to quickly create an interactive chart for the web is to use one of the existing JavaScript charting libraries. They are sets of pre-written charting code, which make it possible to build data visualizations with minimal additional coding effort. The steps for creating a column chart are basically the same regardless of the specific library. Whichever you opt for, include it in your web page by referencing its JavaScript file(s) in the <script> tag in the <head> section. Then add another <script> tag anywhere in the <head> or <body> section; this is where the column charting code will be placed. In this tutorial, to illustrate the process, I will be using one called AnyChart. It is a lightweight JS charting library with detailed documentation and many examples, free for non-commercial purposes. So, I include its base module, anychart-base.min.js, from the AnyChart CDN:

HTML

<html>
  <head>
    <title>JavaScript Column Chart</title>
    <script src="https://cdn.anychart.com/releases/v8/js/anychart-base.min.js"></script>
    <style type="text/css">
      html, body, #container {
        width: 100%;
        height: 100%;
        margin: 0;
        padding: 0;
      }
    </style>
  </head>
  <body>
    <div id="container"></div>
  </body>
</html>

C. Prepare Data

Next, prepare the data you want to visualize in a column chart. I collected the total runs statistics for the ICC Men's T20 World Cup's top 10 scorers from ESPNcricinfo and collated them in a simple JavaScript multidimensional array. (Of course, you may use a different data format like JSON, XML, CSV, and so on.) 
JavaScript [ ["Virat Kohli", "296", "India"], ["Max O'Dowd", "242", "Netherlands"], ["Suryakumar Yadav", "239", "India"], ["JD Butler", "225", "England"], ["Kusal Mendis", "223", "Sri Lanka"], ["Sikandar Raza", "219", "Zimbabwe"], ["Pathum Nissanka", "214", "Sri Lanka"], ["AD Hales", "212", "England"], ["Lorkan Tucker", "204", "Ireland"], ["Glenn Phillips", "201", "New Zealand"] ] D. Write a Visualization Code The ground is set, the players are ready, the toss is done, and now it is time for the match to begin! Creating a column chart with a JS charting library is like hitting a sixer in cricket — less effort and more reward. Let me show you how to get it up and running by writing a few lines of JavaScript code. The first thing that I do is add the anychart.onDocumentReady() function inside my <script> tag in the <body> section. Everything else will go into this function. HTML <script> anychart.onDocumentReady(function() { // The following JS code to create a column chart. }); </script> Then I create a JS column chart instance using the inbuilt function and add a series with the prepared data. JavaScript // create a column chart var chart = anychart.column(); // create a data series var series = chart.column([ ["Virat Kohli", "296", "India"], ["Max O'Dowd", "242", "Netherlands"], ["Suryakumar Yadav", "239", "India"], ["JD Butler", "225", "England"], ["Kusal Mendis", "223", "Sri Lanka"], ["Sikandar Raza", "219", "Zimbabwe"], ["Pathum Nissanka", "214", "Sri Lanka"], ["AD Hales", "212", "England"], ["Lorkan Tucker", "204", "Ireland"], ["Glenn Phillips", "201", "New Zealand"] ]); It is always a good practice to add titles to the axes and chart itself to make it obvious what is represented. Let’s set these: JavaScript // add axis titles chart.xAxis().title("Batsman"); chart.yAxis().title("Number of runs"); // add a chart title chart.title("Top 10 Run Scorers at ICC Men's T20 World Cup 2022"); Lastly, I set the container element — here’s where its ID is needed — and make the resulting column chart visualization appear. JavaScript // set the container element chart.container("container"); // display the chart chart.draw(); Just in case, here’s how the entire JS code within the <script> tag currently looks: JavaScript anychart.onDocumentReady(function () { // create a column chart var chart = anychart.column(); // create a data series var series = chart.column([ ["Virat Kohli", "296", "India"], ["Max O'Dowd", "242", "Netherlands"], ["Suryakumar Yadav", "239", "India"], ["JD Butler", "225", "England"], ["Kusal Mendis", "223", "Sri Lanka"], ["Sikandar Raza", "219", "Zimbabwe"], ["Pathum Nissanka", "214", "Sri Lanka"], ["AD Hales", "212", "England"], ["Lorkan Tucker", "204", "Ireland"], ["Glenn Phillips", "201", "New Zealand"] ]); // add axis titles chart.xAxis().title("Batsman"); chart.yAxis().title("Number of runs"); // add a chart title chart.title("Top 10 Run Scorers at ICC Men's T20 World Cup 2022"); // set the container element chart.container("container"); // display the chart chart.draw(); }); Result 1: Column Chart Voilà! A functional basic JavaScript column chart is done! You can find the interactive version of this diagram with the full source code on Playground. Column charts are designed to facilitate comparisons. Here, I can see how Virat Kohli is quite ahead of the pack, with the rest near each other. But it’s just the beginning! Now I also wonder how each of these players scored those runs. 
More precisely, I want to find out how many runs out of the total are scored by hitting a six, a four, or by running between the wickets. A multi-series column chart or a stacked column chart would perfectly represent that. So, let’s dive deeper into column charting in JS, and I will show you how to make both and then beautify the entire visualization! 2. Basic JS Multi-Series Column Chart Just like a single-series column chart, a multi-series column chart can be made using JavaScript quickly and easily. Actually, the base remains the same, and you just need to change the data. Add Multi-Series Data Instead of totals, let’s add the number of runs scored by hitting (1) sixes, (2) fours, and (3) running between the wickets for each of the top 10 scorers. I take this data from the same source, ESPNcricinfo, and create a data set: JavaScript var dataSet = anychart.data.set([ ["Virat Kohli", "India", "148", "100", "48"], ["Max O'Dowd", "Netherlands", "106", "88", "48"], ["Suryakumar Yadav", "India", "81", "104", "54"], ["JD Butler", "England", "87", "96", "42"], ["Kusal Mendis", "Sri Lanka", "95", "68", "60"], ["Sikandar Raza", "Zimbabwe", "89", "64", "66"], ["Pathum Nissanka", "Sri Lanka", "114", "52", "48"], ["AD Hales", "England", "76", "76", "60"], ["Lorkan Tucker", "Ireland", "104", "76", "24"], ["Glenn Phillips", "New Zealand", "77", "76", "48"] ]); Map the Data Next, it is necessary to map this data to the three series, each indicating a category. The first series indicates the runs scored by running. One more series indicates the runs scored by hitting fours. And the third series indicates the runs scored by hitting sixes. JavaScript var firstSeriesData = dataSet.mapAs({x: 0, value: 4}); var secondSeriesData = dataSet.mapAs({x: 0, value: 3}); var thirdSeriesData = dataSet.mapAs({x: 0, value: 2}); Create the Series Now it’s time to create the three series with the respectively mapped data. JavaScript var series; series = chart.column(firstSeriesData); series = chart.column(secondSeriesData); series = chart.column(thirdSeriesData); Result 2: Multi-Series Column Chart And a basic JS multi-series column chart with grouped series is ready! You can check out its interactive version with the full source code on Playground. A grouped multi-series column chart greatly represents a breakdown by score category. But totals are also worth looking at. So, let’s create stacked columns now! 3. Basic JS Stacked Column Chart To turn grouped columns into a stacked column chart, just one quick line of JavaScript code is more than enough. Set the Value Stacking Mode Enable the value stacking mode on the Y-scale: JavaScript chart.yScale().stackMode("value"); Result 3: Stacked Column Chart There you go! Now you’ve got a basic JS stacked column chart! Its interactive visualization is available on Playground with the full source code. Now, let’s beautify it! 4. Custom JS Stacked Column Chart Depending on exactly how you want to customize your JavaScript-based stacked column chart visualization, you may want to modify different things. I will show you some important but still easy-to-implement adjustments. Adjust the Series When you hover over the interactive columns, the tooltip automatically shows the values for each category. But which one is where? Let’s name the series, and everything will become clear! At the same time, why don’t we play with the colors a little? I will paint the series in the colors of the ICC T20 World Cup 2022’s official logo. 
This will make the column chart look so much more personalized and aesthetically pleasing. For this, I create a function that will accept each series, its name, and the color associated with it. I will also add a stroke attribute in the function, which will be applied to each series for creating a sort of padding between each category. JavaScript var setupSeries = function (series, name, color) { series.name(name).stroke("2 #fff 1").fill(color); }; Now, I set up the three series with the function just created and give each of the series the respective name and color. JavaScript // store the series var series; // create the first series with the function series = chart.column(firstSeriesData); setupSeries(series, "Runs scored with Sixes", "#eb2362"); // create the second series with the function series = chart.column(secondSeriesData); setupSeries(series, "Runs scored with Fours", "#00b1e5"); // create the third series with the function series = chart.column(thirdSeriesData); setupSeries(series, "Running between the wickets", "#0f0449"); Add a Legend To further improve the legibility of the column chart, it is a good idea to add a legend that will show which color indicates which category. This can be done easily by just enabling the legend. I’ll just also add some font size and padding customizations. JavaScript chart.legend().enabled(true).fontSize(14).padding([10, 0, 0, 0]); Check it out; you can hide/show a specific category by clicking on the respective legend item. Enhance the Labels, Tooltip, and Title As you can see, some of the names of the batsmen are not visible on the X-axis. To rectify that, I rotate the labels so that each name can be seen. JavaScript chart.xAxis().labels().rotation(-90); The default column chart tooltip shows individual category values but not the totals. Moreover, totals are not included in the data set. But it’s easy to make them calculated automatically and then put them somewhere, for example, in the tooltip header. JavaScript chart.tooltip().titleFormat(function () { return this.x + " — " + this.points[0].getStat("categoryYSum"); }); Also, it is possible to display the values of all the categories together in the tooltip using the union mode. JavaScript chart.tooltip().displayMode("union"); Finally, let’s make the chart title a bit larger, change its font color, and add some padding. JavaScript chart.title().fontSize(20).fontColor("#2b2b2b").padding([5, 0, 0, 0]); Result 4: Customized Stacked Column Chart That’s it! The stacked column chart is all customized. Have a look at how stunning and insightful it has become! And feel free to see this interactive JS-based stacked column chart on Playground where you can also further play with its code, add your data, and so on. Looks lovely, doesn’t it? And I can distinctly see both total scores and how some batsmen have done a lot of running while some have accumulated more runs with their hits. 5. JS 100% Stacked Column Chart Finally, I want to demonstrate how you can create a 100% stacked column chart representation that can help compare individual categories across all data points in an easier manner. Switch the Column Stacking Mode Just change the stacking mode from value to percent, and your stacked column chart will become a 100% stacked column chart: JavaScript chart.yScale().stackMode("percent"); Result 5: 100% Stacked Column Chart And it’s done, the final data visualization example in this tutorial! 
You are welcome to check out this JavaScript-based percent stacked column chart variation with the entire code on Playground.

Conclusion

In this tutorial, I showed you how to create JavaScript (HTML5) column charts in different variations, such as a regular, single-series column chart, a multi-series grouped column chart, a value-stacked column chart, and a 100% stacked column chart. You also saw how you can customize them. I used the AnyChart JavaScript charting library, but there are multiple others out there at your disposal. A good thing is that, fundamentally, the process is similar with any of them, so you can use whichever suits your needs. Let me know if you have any questions or suggestions. As the batting scores show, the total figures include plenty of boundaries but quite a lot of running as well. So, go on then, work hard and knock it out of the ground with more such beautiful column charts and other data visualizations!
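To illustrate that the process really is similar with other libraries, here is a hypothetical sketch of the same single-series chart using Chart.js instead; the CDN reference and the use of a <canvas> container are assumptions of this sketch, not part of the tutorial above.

JavaScript

// Assumes a <canvas id="container"></canvas> element and the Chart.js script
// (e.g. from https://cdn.jsdelivr.net/npm/chart.js) already loaded on the page.
const topScorers = [
  ["Virat Kohli", 296], ["Max O'Dowd", 242], ["Suryakumar Yadav", 239],
  ["JD Butler", 225], ["Kusal Mendis", 223], ["Sikandar Raza", 219],
  ["Pathum Nissanka", 214], ["AD Hales", 212], ["Lorkan Tucker", 204],
  ["Glenn Phillips", 201]
];

new Chart(document.getElementById("container"), {
  type: "bar", // Chart.js draws vertical column charts with the "bar" type
  data: {
    labels: topScorers.map(row => row[0]),
    datasets: [{
      label: "Runs scored",
      data: topScorers.map(row => row[1])
    }]
  },
  options: {
    plugins: {
      title: {
        display: true,
        text: "Top 10 Run Scorers at ICC Men's T20 World Cup 2022"
      }
    }
  }
});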

By Shachee Swadia
How to Build Great HTML Form Controls

Today I'm going to show you all the things to consider when building the perfect HTML input. Despite its seemingly simple nature, there's actually a lot that goes into it.

How To Make the Control

Well, we need to start somewhere. Might as well start with the control itself. HTML offers three different form controls to choose from: <input>, <textarea>, and <select>. Today, we'll use <input>, but the same rules will apply to the others.

<input />

How To Make <input> Work

Inputs are generally used to capture user data. To do so, they should be placed within a <form> element, but that's not quite enough. When the form is submitted, it won't know how to label the input's data. For a form to include an input's data when the form is submitted, the input needs a name attribute. You don't need state management or data binding. Just a name.

<input name="data" />

How To Make the Input Accessible

Now that we've made the robots happy, it's time to focus on the humans. Every input also needs a label, both for clarity and for accessibility. There are a few options:

Add a <label> element with a for attribute and assign it to the input's id (explicit label).
Wrap the input with a <label> element (implicit label).
Add an aria-label attribute to the input.
Add an aria-labelledby attribute to the input and assign it to the id of another element.

Of all these options, the most reliable is an explicit label, as it works across the most browsers, assistive technologies, and voice-control interfaces. Implicit labels do not work in Dragon Speech Recognition. ARIA attributes are finicky. The placeholder and title attributes are not proper labels. I recommend not wrapping everything in a <label> tag because:

It's prone to include more content than what would be considered the label. This results in a poor experience for screen-reader users.
It's common to add styles to the input's wrapper element. These styles may conflict with the default behavior of a <label>.

In general, I prefer using a <div> to isolate the control.

<div>
  <label for="input-id">Label</label>
  <input id="input-id" name="data" />
</div>

If you ever want an input that does not show the label, don't remove the label from the HTML. Instead, hide it with CSS or fall back to one of the less reliable options above. Keep the label in the markup and visually hide it with a class that applies the styles below; these keep it accessible to assistive technology while visually removing it:

.visually-hidden {
  position: absolute;
  overflow: hidden;
  clip: rect(0 0 0 0);
  width: 1px;
  height: 1px;
  margin: -1px;
  border: 0;
  padding: 0;
}

Note that it's still generally advised to include a visible label to avoid any confusion. A placeholder should not serve as a label.

How To Choose a Type (Or Not)

In addition to the different tags listed above, you can change the control's behavior by setting an input's type attribute. For example, if you want to accept a user's email, you can set the type attribute to "email". Input types can change the behavior or appearance of the UI. Here are just a few examples:

The "number" type changes behavior by preventing non-number value entries.
The "color" type changes the UI by adding a button that opens a color picker.
The "date" type improves the data entry experience by offering a date-picker.
The "email" type enforces built-in constraint validation on form submission.

However, some input types may be false friends. Consider an input that asks for a US zip code. Only numerical entries are valid, so it might make sense to use a "number" type. 
However, one issue with the "number" input is that it adds a scroll behavior such that a user can scroll up on the input to increment the value or down to decrement it. For a zip code input, it's possible that a user clicks on the input, enters their zip code, then tries to scroll down the page. This would decrement the value they entered, and it's very easy for the user to miss that change. As a result, the number they entered could be wrong. In this case, it may be better to avoid the type attribute completely and use a pattern such as [0-9]* if you want to limit the input to only numeric values. In fact, the "number" type is often more problematic than it's worth.

Be Descriptive

Since we've briefly touched on constraint validation, it's a good time to mention descriptions. Although HTML has built-in validation attributes and there are several more robust JavaScript validation libraries, there is another effective approach to getting users to fill in proper data that can be less annoying: tell them exactly what it is you want. Some form controls like "name" or "email" may be obvious, but for those that are not, provide a clear description of what you need. For example, if you are asking users to create a new password, tell them what the requirements are before they try to submit the form. And don't forget about assistive technology users. We can associate an input with a description through visual proximity as well as by using the aria-describedby attribute.

<div>
  <label for="password">Password</label>
  <input
    id="password"
    name="password"
    type="password"
    aria-describedby="password-requirements"
  />
  <p id="password-requirements">Please create a new password. Must contain at least 8 characters, one uppercase letter, one lowercase letter, and one special character.</p>
</div>

Descriptions are also an effective place to put any validation feedback messages.

Be Flexible

When creating inputs, it's often tempting to add constraints on the acceptable values to ensure the user only sends good data. But being too strict can lead to a poor user experience. For example, if you ask the user to enter a phone number, consider that there are several different acceptable formats:

8008675309
800 867 5309
800-867-5309
800.867.5309
(800) 867-5309
+1 (800) 867-5309
001 800 867 5309

All of the above represent the same phone number. Ideally, a user would be able to enter any of these formats and still be able to submit the form without issue. If you want your input to only send number characters, it's possible to allow the user to type in whatever format they want. Then you can use JavaScript to add an event handler to the blur event and remove any unwanted characters (space, dash, period, and so on) from the input's value. This would leave only the numbers.

Make It Easy

If you've ever filled out a form using a mobile device, you may have noticed that your phone's keyboard looks different for different inputs. For a basic text input you see the standard keyboard, for email inputs you may see the @ symbol more conveniently placed, and for number inputs you may see the keyboard replaced with a number pad. In many cases, the browser will choose a more appropriate keyboard to show users if the input type is set. But as we saw above, it's often better to use just a basic text input. We can still offer a nicer user experience to mobile users by asking the browser to show specific keyboards despite the input missing a type attribute. 
We can accomplish this with the inputmode attribute, which accepts eight different options:

text (default value)
none
decimal
numeric
tel
search
email
url

Want to give it a try? Head over to inputmodes.com on your mobile device. It's pretty cool.

Continue Learning

That's over a thousand words I had to say about creating form controls. I hope you found it useful.
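As a follow-up to the zip code and phone number advice above, here is a small, hypothetical example (the field names and hint text are illustrative, not from the article) that keeps a plain text input, requests the phone-style keyboard with inputmode, and strips formatting characters on blur:

<div>
  <label for="phone">Phone number</label>
  <input
    id="phone"
    name="phone"
    inputmode="tel"
    aria-describedby="phone-hint"
  />
  <p id="phone-hint">Any format is fine; we only keep the digits.</p>
</div>

<script>
  // On blur, remove everything except digits so the submitted value is
  // normalized without forcing users into one specific format.
  const phone = document.getElementById("phone");
  phone.addEventListener("blur", () => {
    phone.value = phone.value.replace(/[^0-9]/g, "");
  });
</script>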

By Austin Gil CORE

The Latest Coding Topics

Help the Compiler, and the Compiler Will Help You: Subtleties of Working With Nullable Reference Types in C#
Readers will learn about several non-obvious nullable reference-type features. By the end, users will know how to make applications more secure and correct.
January 27, 2023
by Nikita Panevin
· 984 Views · 1 Like
Real-Time Stream Processing With Hazelcast and StreamNative
In this article, readers will learn about real-time stream processing with Hazelcast and StreamNative in a shorter time, along with demonstrations and code.
January 27, 2023
by Timothy Spann
· 1,837 Views · 2 Likes
article thumbnail
AWS Cloud Migration: Best Practices and Pitfalls to Avoid
This article post will discuss the best practices and common pitfalls to avoid when migrating to the AWS cloud.
January 27, 2023
by Rahul Nagpure
· 1,708 Views · 1 Like
article thumbnail
The Quest for REST
This post focuses on listing some of the lurking issues in the "Glory of REST" and provides hints at ways to solve them.
January 26, 2023
by Nicolas Fränkel CORE
· 2,139 Views · 3 Likes
article thumbnail
Fraud Detection With Apache Kafka, KSQL, and Apache Flink
Exploring fraud detection case studies and architectures with Apache Kafka, KSQL, and Apache Flink with examples, guide images, and informative details.
January 26, 2023
by Kai Wähner CORE
· 2,441 Views · 1 Like
article thumbnail
Playwright vs. Cypress: The King Is Dead, Long Live the King?
QA automation tools are an essential part of the software development process. Let's compare Cypress and Playwright.
January 26, 2023
by Serhii Zabolenny
· 1,810 Views · 1 Like
article thumbnail
Artificial Intelligence in Drug Discovery
This article explores how TypeDB empowers scientists to make the next breakthroughs in medicine possible. This is shown with guide code examples and visuals.
January 26, 2023
by Tomás Sabat
· 1,732 Views · 2 Likes
article thumbnail
Upgrade Guide To Spring Data Elasticsearch 5.0
Learn about the latest Spring Data Elasticsearch 5.0.1 with Elasticsearch 8.5.3, starting with the proper configuration of the Elasticsearch Docker image.
January 26, 2023
by Arnošt Havelka CORE
· 2,138 Views · 1 Like
article thumbnail
Easy Smart Contract Debugging With Truffle’s Console.log
If you’re a Solidity developer, you’ll be excited to hear that Truffle now supports console logging in Solidity smart contracts. Let's look at how.
January 26, 2023
by Michael Bogan CORE
· 2,030 Views · 2 Likes
article thumbnail
DevOps Roadmap for 2022
[Originally published February 2022] In this post, I will share some notes from my mentoring session that can help you - a DevOps engineer or platform engineer, learn where to focus.
January 26, 2023
by Anjul Sahu
· 17,962 Views · 6 Likes
article thumbnail
Apache Kafka vs. Memphis.dev
This article compares the differences between Apache Kafka and Memphis.dev; it includes ecosystems, user experience, availability and messaging, etc.
January 26, 2023
by Yaniv Ben Hemo
· 1,857 Views · 1 Like
article thumbnail
What Is Policy-as-Code? An Introduction to Open Policy Agent
Learn the benefits of policy as code and start testing your policies for cloud-native environments.
January 26, 2023
by Tiexin Guo
· 3,038 Views · 1 Like
article thumbnail
Do Not Forget About Testing!
This article dives into why software testing is essential for developers. By the end, readers will understand why testing is needed, types of tests, and more.
January 26, 2023
by Lukasz J
· 2,793 Views · 1 Like
article thumbnail
Commonly Occurring Errors in Microsoft Graph Integrations and How to Troubleshoot Them (Part 3)
This third article explains common integration errors that may be seen in the transition from EWS to Microsoft Graph as to the resource type To Do Tasks.
January 25, 2023
by Constantin Kwiatkowski
· 2,123 Views · 1 Like
article thumbnail
Handling Automatic ID Generation in PostgreSQL With Node.js and Sequelize
In this article, readers will learn four ways to handle automatic ID generation in Sequelize and Node.js for PostgreSQL, which includes simple guide code.
January 25, 2023
by Brett Hoyer
· 2,058 Views · 3 Likes
article thumbnail
Key Considerations When Implementing Virtual Kubernetes Clusters
In this article, readers will receive key considerations to examine when implementing virutal Kubernetes clusters, along with essential questions and answers.
January 25, 2023
by Hanumantha (Hemanth) Kavuluru
· 3,078 Views · 3 Likes
article thumbnail
Beginners’ Guide to Run a Linux Server Securely
This article explains what you need to take some essential considerations for tackling common security risks with Linux Server.
January 25, 2023
by Hadi Samadzad
· 1,940 Views · 2 Likes
article thumbnail
How Do the Docker Client and Docker Servers Work?
This article will help you deeply understand how Docker's client-server model works and give you more insights about the Docker system.
January 25, 2023
by Eugenia Kuzmenko
· 3,041 Views · 1 Like
article thumbnail
Best Practices to Succeed at Continuous AWS Security Monitoring
This article will look at best practices to efficiently ingest, normalize, and structure their AWS logs so that security teams can implement the proper detections.
January 25, 2023
by Jack Naglieri
· 1,916 Views · 1 Like
article thumbnail
Microsoft Azure Logic Apps Service
This article is about automating processes, workflows, etc., using Platform as a Service (PaaS) from Microsoft Azure's Azure Logic Apps.
January 25, 2023
by Sardar Mudassar Ali Khan
· 1,972 Views · 1 Like
