A framework is a collection of reusable code that supports the development process by providing ready-made components. Frameworks supply architectural patterns and structures that help speed up development. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring Framework, Drupal, Angular, Eclipse, and more.
Java has been a popular programming language for developing robust and scalable applications for many years. With the rise of REST APIs, Java has again proven its worth by providing numerous frameworks for building RESTful APIs. A REST API is an interface that enables communication between applications and allows them to exchange data. In this article, we'll discuss the top four Java REST API frameworks, their pros and cons, and a CRUD example for each to help you choose the right one for your next project.

1. Spring Boot

Spring Boot is one of the most popular Java frameworks for building REST APIs. It offers a range of features and tools to help you quickly develop RESTful services. With its built-in support for various data sources, it makes it easy to create CRUD operations for your database.

Pros:

- Easy to use and set up.
- Built-in support for multiple data sources.
- Supports a variety of web applications, including RESTful services, WebSockets, and more.
- Offers a large library of plugins and modules to add additional functionality.

Cons:

- Steep learning curve for beginners.
- Can be too heavy for smaller projects.
- Requires a good understanding of Java and the Spring framework.

Example CRUD Operations in Spring Boot:

Java
// Creating a resource
@PostMapping("/users")
public User createUser(@RequestBody User user) {
    return userRepository.save(user);
}

// Reading a resource
@GetMapping("/users/{id}")
public User getUserById(@PathVariable Long id) {
    return userRepository.findById(id).orElse(null);
}

// Updating a resource
@PutMapping("/users/{id}")
public User updateUser(@PathVariable Long id, @RequestBody User user) {
    User existingUser = userRepository.findById(id).orElse(null);
    if (existingUser != null) {
        existingUser.setUsername(user.getUsername());
        existingUser.setPassword(user.getPassword());
        return userRepository.save(existingUser);
    }
    return null;
}

// Deleting a resource
@DeleteMapping("/users/{id}")
public void deleteUser(@PathVariable Long id) {
    userRepository.deleteById(id);
}

2. Jersey

Jersey is another Java framework for building REST APIs. It provides a simple, easy-to-use API for creating RESTful services and is widely used for building microservices. Jersey is also the reference implementation of JAX-RS, making it an ideal choice for developing standards-based RESTful applications.

Pros:

- Simple and easy to use.
- Reference implementation of JAX-RS.
- Ideal for building microservices.
- Offers a large library of plugins and modules to add additional functionality.

Cons:

- Can be slow compared to other frameworks.
- Can be difficult to debug.
- Requires a good understanding of Java and REST APIs.
Example CRUD Operations in Jersey:

Java
// Creating a resource
@POST
@Consumes(MediaType.APPLICATION_JSON)
public Response createUser(User user) {
    userRepository.save(user);
    return Response.status(Response.Status.CREATED).build();
}

// Reading a resource
@GET
@Path("/{id}")
@Produces(MediaType.APPLICATION_JSON)
public Response getUserById(@PathParam("id") Long id) {
    User user = userRepository.findById(id).orElse(null);
    if (user != null) {
        return Response.ok(user).build();
    }
    return Response.status(Response.Status.NOT_FOUND).build();
}

// Updating a resource
@PUT
@Path("/{id}")
@Consumes(MediaType.APPLICATION_JSON)
public Response updateUser(@PathParam("id") Long id, User user) {
    User existingUser = userRepository.findById(id).orElse(null);
    if (existingUser != null) {
        existingUser.setUsername(user.getUsername());
        existingUser.setPassword(user.getPassword());
        userRepository.save(existingUser);
        return Response.ok().build();
    }
    return Response.status(Response.Status.NOT_FOUND).build();
}

// Deleting a resource
@DELETE
@Path("/{id}")
public Response deleteUser(@PathParam("id") Long id) {
    User user = userRepository.findById(id).orElse(null);
    if (user != null) {
        userRepository.delete(user);
        return Response.ok().build();
    }
    return Response.status(Response.Status.NOT_FOUND).build();
}

3. Play Framework

Play Framework is a high-performance framework for building REST APIs in Java. It offers a lightweight and flexible architecture, making it easy to develop and deploy applications quickly. Play is designed to work with Java 8 and Scala, making it a great choice for modern applications.

Pros:

- Lightweight and flexible architecture.
- High performance.
- Supports Java 8 and Scala.
- Offers a large library of plugins and modules to add additional functionality.

Cons:

- Steep learning curve for beginners.
- Can be difficult to debug.
- Requires a good understanding of Java and REST APIs.

Example CRUD Operations in Play Framework:

Java
// Creating a resource
public Result createUser() {
    JsonNode json = request().body().asJson();
    User user = Json.fromJson(json, User.class);
    userRepository.save(user);
    return ok();
}

// Reading a resource
public Result getUserById(Long id) {
    User user = userRepository.findById(id).orElse(null);
    if (user != null) {
        return ok(Json.toJson(user));
    }
    return notFound();
}

// Updating a resource
public Result updateUser(Long id) {
    User existingUser = userRepository.findById(id).orElse(null);
    if (existingUser != null) {
        JsonNode json = request().body().asJson();
        User user = Json.fromJson(json, User.class);
        existingUser.setUsername(user.getUsername());
        existingUser.setPassword(user.getPassword());
        userRepository.save(existingUser);
        return ok();
    }
    return notFound();
}

// Deleting a resource
public Result deleteUser(Long id) {
    User user = userRepository.findById(id).orElse(null);
    if (user != null) {
        userRepository.delete(user);
        return ok();
    }
    return notFound();
}

4. Vert.x

Vert.x is a modern, high-performance framework for building REST APIs in Java. It provides a lightweight and flexible architecture, making it easy to develop and deploy applications quickly. Vert.x is polyglot and supports both Java and JavaScript, among other languages, making it a great choice for applications that require both.

Pros:

- Lightweight and flexible architecture.
- High performance.
- Supports both Java and JavaScript.
- Offers a large library of plugins and modules to add additional functionality.

Cons:

- Steep learning curve for beginners.
- Can be difficult to debug.
- Requires a good understanding of Java and REST APIs.
Example CRUD Operations in Vert.x:

Java
// Creating a resource
router.post("/").handler(routingContext -> {
    JsonObject user = routingContext.getBodyAsJson();
    userRepository.save(user);
    routingContext.response().setStatusCode(201).end();
});

// Reading a resource
router.get("/:id").handler(routingContext -> {
    Long id = Long.valueOf(routingContext.request().getParam("id"));
    JsonObject user = userRepository.findById(id).orElse(null);
    if (user != null) {
        routingContext.response().end(user.encode());
    } else {
        routingContext.response().setStatusCode(404).end();
    }
});

// Updating a resource
router.put("/:id").handler(routingContext -> {
    Long id = Long.valueOf(routingContext.request().getParam("id"));
    JsonObject user = userRepository.findById(id).orElse(null);
    if (user != null) {
        JsonObject updatedUser = routingContext.getBodyAsJson();
        user.put("username", updatedUser.getString("username"));
        user.put("password", updatedUser.getString("password"));
        userRepository.save(user);
        routingContext.response().end();
    } else {
        routingContext.response().setStatusCode(404).end();
    }
});

// Deleting a resource
router.delete("/:id").handler(routingContext -> {
    Long id = Long.valueOf(routingContext.request().getParam("id"));
    userRepository.deleteById(id);
    routingContext.response().setStatusCode(204).end();
});

In conclusion, these are the top Java REST API frameworks that you can use to build robust and scalable REST APIs. Each framework has its own strengths and weaknesses, so it's important to choose the one that best fits your specific needs. Whether you're a beginner or an experienced Java developer, these frameworks offer the tools and functionality you need to create high-performance REST APIs quickly and efficiently.
When I started working on this post, I had another idea in mind: I wanted to compare the developer experience and performance of Spring Boot with GraalVM against Rust on a demo HTTP API application. Unfortunately, the M1 processor of my MacBook Pro had other ideas. Hence, I changed my initial plan: I'll write about the developer experience of developing the above application in Rust, compared to what I'm used to with Spring Boot.

The Sample Application

Like every pet project, the application is limited in scope. I designed a simple CRUD HTTP API. Data is stored in PostgreSQL.

When one designs an app on the JVM, the first and only design decision is choosing the framework: a couple of years ago, it was Spring Boot. Nowadays, the choice is mostly between Spring Boot, Quarkus, and Micronaut. In many cases, they all rely on the same underlying libraries, e.g., for logging or connection pools.

Rust is much younger; hence, its ecosystem has yet to mature. For every feature, one needs to choose precisely which library to use - or implement it oneself. Worse, one first needs to know that such a feature exists at all. Here are the ones that I searched for:

- Reactive database access
- Database connection pooling
- Mapping rows to structures
- Web endpoints
- JSON serialization
- Configuration from different sources, e.g., YAML, environment variables, etc.

Web Framework

The choice of the web framework is the most critical one. I have to admit I had no prior clue about such libraries. I looked around and stumbled upon "Which Rust web framework to choose in 2022." After reading the post, I decided to follow its conclusion and chose axum:

- Route requests to handlers with a macro-free API
- Declaratively parse requests using extractors
- Simple and predictable error handling model
- Generate responses with minimal boilerplate
- Take full advantage of the tower and tower-http ecosystem of middleware, services, and utilities

In particular, the last point is what sets axum apart from other frameworks. axum doesn't have its own middleware system but instead uses tower::Service. This means axum gets timeouts, tracing, compression, authorization, and more, for free. It also enables you to share middleware with applications written using hyper or tonic. - axum crate documentation

axum uses the Tokio asynchronous library underneath. For basic usage, it requires two crates:

TOML
[dependencies]
axum = "0.6"
tokio = { version = "1.23", features = ["full"] }

axum's router looks very similar to Spring's Kotlin Routes DSL:

Rust
let app = Router::new()
    .route("/persons", get(get_all))         //1
    .route("/persons/:id", get(get_by_id));  //1 //2

async fn get_all() -> Response { ... }

async fn get_by_id(Path(id): Path<Uuid>) -> Response { ... }

1. A route is defined by the path and a function reference.
2. A route can have path parameters. axum can infer parameters and bind them.

Shared Objects

An issue commonly found in software projects is sharing an "object" with others. We established long ago that sharing global variables is a bad idea. Spring Boot (and similar JVM frameworks) solves this with runtime dependency injection: objects are created by the framework, stored in a context, and injected into other objects when the application starts. Other frameworks do dependency injection at compile time, e.g., Dagger 2.

Rust has neither a runtime nor objects. Configurable dependency injection is not "a thing." But we can create a variable and inject it manually where needed.
In Rust, sharing is a problem because of ownership:

Ownership is a set of rules that govern how a Rust program manages memory. All programs have to manage the way they use a computer's memory while running. Some languages have garbage collection that regularly looks for no-longer-used memory as the program runs; in other languages, the programmer must explicitly allocate and free the memory. Rust uses a third approach: memory is managed through a system of ownership with a set of rules that the compiler checks. If any of the rules are violated, the program won't compile. None of the features of ownership will slow down your program while it's running. - "What Is Ownership?"

axum provides a dedicated wrapper, the State extractor, to reuse variables across different scopes:

Rust
struct AppState { //1
    ...
}

impl AppState {
    fn create() -> Arc<AppState> { //2
        Arc::new(AppState { ... })
    }
}

let app_state = AppState::create();

let app = Router::new()
    .route("/persons", get(get_all))
    .with_state(Arc::clone(&app_state)); //3

async fn get_all(State(state): State<Arc<AppState>>) -> Response { //4
    ... //5
}

1. Create the struct to be shared.
2. Create a new struct wrapped in an Atomically Reference Counted pointer.
3. Share the reference with all routing functions, e.g., get_all.
4. Pass the state.
5. Use it!

Automated JSON Serialization

Modern JVM web frameworks automatically serialize objects to JSON before sending them. The good news is that axum does the same. It relies on Serde. First, we add the serde and serde_json crate dependencies:

TOML
[dependencies]
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"

Then, we annotate our struct with the derive(Serialize) macro:

Rust
#[derive(Serialize)]
struct Person {
    first_name: String,
    last_name: String,
}

Finally, we return the struct wrapped in a Json, together with the HTTP status code, as an axum Response:

Rust
async fn get_test() -> impl IntoResponse { //1
    let person = Person { //2
        first_name: "John".to_string(),
        last_name: "Doe".to_string(),
    };
    (StatusCode::OK, Json(person)) //3
}

1. The tuple (StatusCode, Json) is automatically converted into a Response.
2. Create the Person.
3. Return the tuple.

At runtime, axum automatically serializes the struct to JSON:

JSON
{"first_name":"John","last_name":"Doe"}

Database Access

For a long time, I used the MySQL database for my demos, but I started to read a lot of good things about PostgreSQL and decided to switch. I needed an asynchronous library compatible with Tokio: that's exactly what the tokio_postgres crate does. The problem with the crate is that it creates direct connections to the database. I searched for a connection pool crate and stumbled upon deadpool (sic):

Deadpool is a dead simple async pool for connections and objects of any type. - Deadpool

Deadpool provides two distinct implementations:

- An unmanaged pool: the developer has complete control - and responsibility - over the pooled objects' lifecycle.
- A managed pool: the crate creates and recycles objects as needed.

More specialized implementations of the latter cater to different databases or "drivers," e.g., Redis and... tokio-postgres. One can configure Deadpool directly or defer to the config crate it supports. The latter crate allows several alternatives for configuration:

Config organizes hierarchical or layered configurations for Rust applications.
Config lets you set a set of default parameters and then extend them via merging in configuration from a variety of sources:

- Environment variables
- String literals in well-known formats
- Another Config instance
- Files: TOML, JSON, YAML, INI, RON, JSON5, and custom ones defined with the Format trait
- Manual, programmatic override (via a .set method on the Config instance)

Additionally, Config supports:

- Live watching and re-reading of configuration files
- Deep access into the merged configuration via a path syntax
- Deserialization via serde of the configuration or any subset defined via a path

- Crate config

To create the base configuration, one needs to create a dedicated structure and use the crate:

Rust
#[derive(Deserialize)] //1
struct ConfigBuilder {
    postgres: deadpool_postgres::Config, //2
}

impl ConfigBuilder {
    async fn from_env() -> Result<Self, ConfigError> { //3
        Config::builder()
            .add_source(
                Environment::with_prefix("POSTGRES") //4
                    .separator("_") //4
                    .keep_prefix(true) //5
                    .try_parsing(true),
            )
            .build()?
            .try_deserialize()
    }
}

let cfg_builder = ConfigBuilder::from_env().await.unwrap(); //6

1. The Deserialize macro is mandatory.
2. The field must match the environment prefix (see below).
3. The function is async and returns a Result.
4. Read from environment variables whose names start with POSTGRES_.
5. Keep the prefix in the configuration map.
6. Enjoy!

Note that the environment variables should conform to what Deadpool's Config expects. Here's my configuration in Docker Compose:

Env variable         Value
POSTGRES_HOST        "postgres"
POSTGRES_PORT        5432
POSTGRES_USER        "postgres"
POSTGRES_PASSWORD    "root"
POSTGRES_DBNAME      "app"

Once we have initialized the configuration, we can create the pool:

Rust
struct AppState {
    pool: Pool, //1
}

impl AppState {
    async fn create() -> Arc<AppState> { //2
        let cfg_builder = ConfigBuilder::from_env().await.unwrap(); //3
        let pool = cfg_builder //4
            .postgres
            .create_pool(
                Some(deadpool_postgres::Runtime::Tokio1),
                tokio_postgres::NoTls,
            )
            .unwrap();
        Arc::new(AppState { pool }) //2
    }
}

1. Wrap the pool in a custom struct.
2. Wrap the struct in an Arc to pass it within an axum State (see above).
3. Get the configuration.
4. Create the pool.

Then, we can pass the pool to the routing functions:

Rust
let app_state = AppState::create().await; //1
let app = Router::new()
    .route("/persons", get(get_all))
    .with_state(Arc::clone(&app_state)); //2

async fn get_all(State(state): State<Arc<AppState>>) -> Response {
    let client = state.pool.get().await.unwrap(); //3
    let rows = client
        .query("SELECT id, first_name, last_name FROM person", &[]) //4
        .await //5
        .unwrap();
    ... //6
}

1. Create the state.
2. Pass the state to the routing functions.
3. Get the pool out of the state, and get the client out of the pool.
4. Create the query.
5. Execute it.
6. Read the rows to populate the Response.

The last step is to implement the transformation from a Row to a Person. We can do it with the From trait:

Rust
impl From<&Row> for Person {
    fn from(row: &Row) -> Self {
        let first_name: String = row.get("first_name");
        let last_name: String = row.get("last_name");
        Person {
            first_name,
            last_name,
        }
    }
}

let person = row.into();

Docker Build

The final step is building the application. I want everybody to be able to build it, so I used Docker. Here's the Dockerfile:

Dockerfile
FROM --platform=x86_64 rust:1-slim AS build           #1

RUN rustup target add x86_64-unknown-linux-musl       #2
RUN apt update && apt install -y musl-tools musl-dev  #3

WORKDIR /home

COPY Cargo.toml .
COPY Cargo.lock .
COPY src src

RUN --mount=type=cache,target=/home/.cargo \          #4
    cargo build --target x86_64-unknown-linux-musl --release  #5

FROM scratch                                          #6

COPY --from=build /home/target/x86_64-unknown-linux-musl/release/rust /app  #7

CMD ["/app"]

1. Start from a standard Rust image.
2. Add the musl target so we can compile for Alpine Linux.
3. Install the required Alpine dependencies.
4. Cache the dependencies.
5. Build for Alpine Linux.
6. Start from scratch.
7. Add the previously built binary.

The final image is 7.56MB. My experience has shown that an equivalent GraalVM native compiled image would be more than 100MB.

Conclusion

Though it was not my initial plan, I learned about quite a few libraries with this demo app and how they work. More importantly, I've experienced what it is like to develop an app without a framework like Spring Boot. You need to know the following:

- The available crates for each capability
- Crate compatibility
- Version compatibility

Last but not least, the documentation of most of the above crates ranges from average to good. I found axum's to be good; on the other hand, I didn't manage to use Deadpool correctly from the start and had to go through several iterations. Documentation quality varies from crate to crate, and all in all, there is still room to reach the level of modern JVM frameworks.

Also, the demo app was quite simple. I assume that more advanced features could be more painful.

The complete source code for this post can be found on GitHub.

To go further:

- Create an Optimized Rust Alpine Docker Image
- How to create small Docker images for Rust
- Using Axum Framework To Create Rest API
TestNG is a Java-based open-source test automation framework. It covers a broad range of test categories: unit, functional, end-to-end, integration, etc. This framework is quite popular among developers and testers for test creation due to useful features like grouping, dependencies, prioritization, and easy-to-use annotations. Another reason for its popularity is that it helps them organize tests in a structured way and enhances the scripts' maintainability and readability. Although it was developed along the same lines as NUnit and JUnit, its advanced features make it a much more robust framework in comparison to its peers.

Choosing the Right Selenium Java Framework

New generation frameworks are emerging today with multiple advantages that can mark a shift in how applications are created and used. Of course, the best framework depends on you, your team, and the goals you're trying to hit. But if your ideal choice is TestNG, this TestNG framework tutorial will guide you so that you can test anything quickly and easily.

JUnit 5 vs. TestNG

JUnit and TestNG are the most popular Java frameworks for automated Selenium testing, and you should choose the one that suits your requirements best. Let us get to the core differences between JUnit 5 and TestNG:

- JUnit is an open-source unit testing framework for Java, while TestNG is also a Java-based framework but has a wider scope, covering different types of testing such as functional, end-to-end, unit, etc.
- TestNG annotations are easier to use, and TestNG offers a more significant number of annotations for use in test scripts.
- Dependency tests are supported by TestNG only. These are tests where one method will not run unless the dependent method runs and passes.

Getting Started With TestNG

This section of the TestNG framework tutorial will help you get started with running automation tests with the TestNG framework.

Installing TestNG in Eclipse

Eclipse is used to develop and test code, and there are numerous ways to install TestNG in it. They include installing the TestNG plugin in Eclipse in the following ways:

- Through the marketplace
- Without the marketplace
- By downloading the library

Create a TestNG Project in Eclipse

Learn how to create a TestNG project from scratch and write your very first test script. The prerequisites for getting started are the Eclipse IDE and downloading Selenium WebDriver and the Selenium Client for Java. These are the next steps to be followed:

- Create a TestNG project in Eclipse
- Add Selenium JAR files to the Selenium TestNG project
- Create a TestNG class in Eclipse
- Write your first test case using Selenium
- Generate TestNG reports

Running Tests With TestNG

This section of the TestNG framework tutorial will act as your step-by-step guide to successfully running tests with TestNG.

Automation With Selenium and TestNG

Let's understand automation through this TestNG framework tutorial. TestNG is a popular option for automation engineers, who can rely on its built-in features and know that they have an active community of developers behind them. Dive into the practical demonstration of automation using TestNG. We will cover the installation process using two of the most used IDEs, Eclipse and IntelliJ. To understand the automation process better, we have stressed annotations and attributes: annotations are used to give meaning to a function in the test script and describe its behavior, while attributes are part of annotations that help make our tests more precisely defined. A short example follows below.
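As a minimal sketch (the class and method names are illustrative, not from any particular project), here is how common TestNG annotations combine with attributes such as description and priority:

Java
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class CalculatorTest {

    @BeforeMethod
    public void setUp() {
        // Runs before each test method, e.g., to prepare test data or start a browser session.
    }

    // The description attribute documents intent; priority controls execution order (lower runs first).
    @Test(description = "Verify that a sum is computed correctly", priority = 1)
    public void verifySum() {
        Assert.assertEquals(2 + 2, 4);
    }

    @AfterMethod
    public void tearDown() {
        // Runs after each test method, e.g., to release resources.
    }
}

The same pattern scales up: configuration methods such as @BeforeMethod and @AfterMethod keep setup and cleanup out of the test bodies, while attributes on @Test carry the metadata TestNG uses to schedule and report the run.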
Parallel Test Execution in TestNG

The traditional approach of manual testing has already been overtaken by automation testing, and one widely adopted automation strategy is the shift from sequential testing to parallel testing. To save time and ensure maximum coverage, QA teams run tests in parallel on multiple devices, browsers, and OS combinations, testing either an entire app or a single component of it. Understand parallel test execution in Selenium using the TestNG XML configuration, and learn how to perform parallel testing in TestNG on a cloud grid.

Creating a TestNG XML File

Running a single test at a time is not very effective when you use a Selenium Grid; you need to execute multiple test cases in parallel to make the most of it. One way to execute multiple test files from a single file is to use the TestNG XML file. This file allows you to specify which test files to run and provides additional control over the execution of your tests, so you can easily manage and run multiple test files. Dive deeper into creating a TestNG XML file to learn why and how to create one, and how to run it, including in parallel.

Automation With Selenium, Cucumber, and TestNG

Get ready to run your first Cucumber script and leverage TestNG capabilities to perform parallel testing. Cucumber is a framework used to write and run tests for behavior-driven development (BDD). It allows tests to be written in a natural language that is easily understandable by developers and non-technical stakeholders alike. This helps improve communication and collaboration among team members and ensures that the system being built meets the requirements and expectations of its users.

Run JUnit Selenium Tests Using TestNG

JUnit is a unit testing framework for Java, while TestNG is a Java framework with a broader scope than unit testing. Both JUnit and TestNG can be used to run unit tests, but they differ in their approach and functionality. The TestNG framework supports more test categories than JUnit, making it easier to migrate test cases from JUnit to TestNG. One advantage of using TestNG to run JUnit tests is that it allows you to avoid completely rewriting test scenarios originally written with JUnit. This saves time and effort and enables you to take advantage of TestNG's features.

Diving Deep Into Advanced Use Cases for TestNG

In this section of the TestNG framework tutorial, you will learn more about TestNG by diving deep into its advanced use cases.

Group Test Cases in TestNG

In TestNG, a group is a set of test methods that share a common characteristic. Groups are used to selectively run or exclude specific tests from a test run. This allows you to organize your tests into logical units and easily include or exclude them from a test run based on your needs. For example, you could create groups for different types of tests, such as unit tests and integration tests, and then run only the unit tests in a suite by specifying the unit test group. TestNG also allows you to define dependencies between groups, so that tests in one group will only run if tests in another group have run and passed. This can be useful for ensuring that certain tests run only after certain preconditions have been met, as the sketch below shows.
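Here is a minimal, illustrative sketch (group and method names are hypothetical) of grouping test methods and declaring a dependency between groups:

Java
import org.testng.annotations.Test;

public class GroupedTests {

    // Belongs to the "unit" group: fast, isolated checks.
    @Test(groups = "unit")
    public void unitCheck() {
        // ...
    }

    @Test(groups = "unit")
    public void anotherUnitCheck() {
        // ...
    }

    // Runs only if every test in the "unit" group has run and passed.
    @Test(groups = "integration", dependsOnGroups = "unit")
    public void integrationCheck() {
        // slower test that assumes the unit checks hold
    }
}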
Prioritizing Tests in TestNG

Let's understand test prioritization in this TestNG framework tutorial. In TestNG, tests can be prioritized using the priority attribute in the @Test annotation. This allows you to specify the order in which the tests should be executed: tests with a lower priority value are executed before tests with a higher priority value. This is especially useful when defining a sequence for test case execution or when assigning precedence to some methods over others. You can also specify that certain tests should be ignored, which is useful if you want to temporarily disable a test without deleting it. By using the priority attribute, you can control the order in which tests are executed and selectively ignore specific tests in TestNG.

Assertions in TestNG

One of the key features of TestNG is the ability to use assertions within your test cases. Assertions are a way to verify that the output of a piece of code matches what you expect. For example, if you are testing a method that calculates the sum of two numbers, you can use an assertion to verify that the output of the method is the expected sum. If the output does not match the expected value, the test fails. There are various types of assertions available in TestNG, including checks for whether a value is true or false, whether two objects are equal, and whether an object is null. You can also attach custom messages to assertions to provide more detailed information about why a test failed. Overall, assertions in TestNG are a powerful tool for verifying the correctness of your code: they help you identify problems and ensure that your code is working as expected.

DataProviders in TestNG

A DataProvider in TestNG allows us to pass multiple sets of parameters to a single test using only one execution cycle. To use a DataProvider, you first define a method that returns an array of objects containing the data you want to pass to the test method, and you annotate it with the @DataProvider annotation. Then, in the test method, you specify which DataProvider to use by referencing its name in the @Test annotation. When the test is executed, TestNG calls the DataProvider method and passes the data it returns to the test method, which can then use this data to perform its checks. This allows you to test a piece of code against multiple different sets of input data, helping you ensure that your code works correctly with different types of input and identify any problems with it.

Parameterization in TestNG

If most of your tests are likely to perform similar actions, then parameterization may be the right tool for you. We can use parameterization in our automation scripts, depending on the framework we're using. When your application involves inputting different types of user interactions, parameterization is the way to go. Parameterizing your tests allows you to write fewer tests and still achieve the same coverage; the DataProvider sketch below illustrates the idea.
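As a minimal sketch (the data values and names are illustrative), a DataProvider feeding a parameterized test might look like this:

Java
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class AdditionTest {

    // Each inner array is one invocation of the test method: {a, b, expectedSum}.
    @DataProvider(name = "sums")
    public Object[][] sums() {
        return new Object[][] {
            {1, 2, 3},
            {2, 2, 4},
            {10, -4, 6},
        };
    }

    // TestNG runs this method once per row returned by the DataProvider.
    @Test(dataProvider = "sums")
    public void verifyAddition(int a, int b, int expected) {
        Assert.assertEquals(a + b, expected);
    }
}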
TestNG Listeners in Selenium WebDriver

TestNG Listeners allow you to customize test results and provide valuable information about your tests. Selenium WebDriver's TestNG Listeners are modules that listen to certain events and keep track of the test execution, performing some action at every stage. TestNG Listeners are classes that can be used to listen to events that occur during the execution of a TestNG test. They allow you to add additional functionality to your tests and can be used to perform a wide range of tasks, such as logging test results, generating reports, or sending notifications. Listeners can be plugged into your tests by annotating your test class with the @Listeners annotation and specifying the listener class or classes to be used; a sketch follows at the end of this section.

TestNG Annotations

Annotations were first added to the Java language in JDK 5. TestNG annotations can be added to your code to control how TestNG executes your tests. These annotations identify the different components of your tests, such as test methods, groups, and configurations. TestNG includes a wide range of annotations that control various aspects of your tests, such as the order in which they are run, the conditions under which they are executed, and how they are organized into groups. Using annotations makes it easier to manage and organize your tests and helps you create more maintainable and reliable test code.

TestNG Reporter Log in Selenium

The TestNG reporter log is a feature of the TestNG testing framework that allows you to generate detailed reports about the execution of your Selenium tests. The reporter log provides information about the tests that were run, including the number of tests that passed and failed, the time taken to run each test, and any exceptions or errors that were thrown. This information can be very useful for understanding the results of your tests and identifying potential issues. Overall, the TestNG Reporter class allows us to create informative and helpful reports without relying on third-party software, improving the efficiency of test analysis.

TestNG Reports in Jenkins

TestNG reports can be integrated with Jenkins to provide additional insights into the results of your Selenium tests. Jenkins is an open-source automation server that can be used to automate a wide range of tasks, including building, testing, and deploying software. By integrating TestNG with Jenkins, you can create a continuous integration (CI) pipeline that automatically runs your tests as part of your build process and generates reports about the results. To use TestNG reports in Jenkins, you need to configure the TestNG plugin; once that is done, Jenkins will automatically generate TestNG reports for your Selenium tests as part of your build process, providing valuable insights into the quality of your code.
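To close this section, here is a minimal, hypothetical sketch of the listener mechanism described above: a listener that logs failures, attached to a test class via @Listeners (assuming a recent TestNG version, where ITestListener methods have default implementations so only the relevant callback needs overriding):

Java
import org.testng.ITestListener;
import org.testng.ITestResult;
import org.testng.annotations.Listeners;
import org.testng.annotations.Test;

// Attach the listener to this test class.
@Listeners(ListenerDemoTest.FailureLogger.class)
public class ListenerDemoTest {

    // A simple listener that reacts only to test failures.
    public static class FailureLogger implements ITestListener {
        @Override
        public void onTestFailure(ITestResult result) {
            System.out.println("Test failed: " + result.getName());
        }
    }

    @Test
    public void alwaysPasses() {
        // The listener stays silent for passing tests.
    }
}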
If you are a web developer, open-source projects can help you not only expand your practical knowledge but also build solutions and services for yourself and your clients. This software provides hands-on opportunities to implement existing approaches, patterns, and software engineering techniques that can be applied to projects further down the road. Since it is vital to securely create solutions that can be easily scaled, we will consider projects built on ASP.NET, a framework for building innovative cloud-based web applications using .NET that can be used for development and deployment on various operating systems.

Open-Source Projects on ASP.NET and .NET

Our team has compiled four open-source projects that let you work with various architectures and coding techniques.

1. nopCommerce

nopCommerce is a free, open-source eCommerce platform and the most powerful shopping cart built on ASP.NET Core in the world. Fully customizable, stable, secure, and extensible, nopCommerce provides a variety of built-in enterprise eCommerce features that can help you develop a project of any complexity. To help you get acclimated quickly and effectively with its architecture, main design, system requirements, installation steps, and other aspects of setup, there is comprehensive documentation that covers every aspect of developing an online store of any kind and size. Furthermore, the nopCommerce team has introduced a training course for developers that may give you a significant boost in building eCommerce solutions, even enterprise-level ones, for existing and new nopCommerce clients.

Technologies Associated With nopCommerce

- Redis, an in-memory data store that allows developers to store, access, and use data in applications with fewer and simpler lines of code;
- LINQ to DB, a Language-Integrated Query (LINQ) library for database access that provides a light and fast layer between your database and your Plain Old CLR Objects (POCOs);
- NUnit, an open-source testing framework made for all .NET languages;
- Moq, a user-friendly mocking framework built for .NET.

nopCommerce's GitHub Statistics

- Latest release: 4.50.3
- Starred by 7,392
- 5,401 closed issues
- Languages: C# - 58.5%, HTML - 15.6%, JavaScript - 11.6%, TSQL - 10.1%, Less - 2.2%, CSS - 2.0%

2. OrchardCore

OrchardCore is a modular, multi-tenant, open-source ASP.NET Core application framework and CMS. If you are a developer looking to build SaaS applications, you are likely to be more interested in the modular framework. It is important to distinguish between the framework and the CMS, as the latter is best for building administrable websites; developers typically build modules on the CMS to enhance their sites. The OrchardCore documentation and its README file on GitHub can assist you in developing a web CMS by outlining the architectural decisions that were made to obtain both flexibility and a positive user experience.

Technologies Associated With OrchardCore

- Docker, a software platform that virtualizes the operating system (OS) of the computer on which it is installed and running, streamlining the process of building, running, managing, and distributing applications;
- Redis;
- SignalR, an ASP.NET software package that enables server-side code to push content to associated web clients instantly.
OrchardCore's GitHub Statistics

- Latest release: 1.4.0
- Starred by 6,040
- 4,468 closed issues
- Languages: C# - 51.8%, CSS - 20.6%, JavaScript - 15.7%, HTML - 9.5%, SCSS - 1.4%, Pug - 0.4%, Other - 0.6%

3. eShopOnWeb

eShopOnWeb is a sample application powered by Microsoft, and it can serve as a starting point for developers who might feel overwhelmed by the complexity of the previously mentioned projects. The project demonstrates a layered architecture with a monolithic deployment pattern and focuses on container-based application architecture. There is no dedicated documentation file or website page; however, you can find all the helpful information in the README file of its repository, which links to other useful articles and videos.

Technologies Associated With eShopOnWeb

- Docker;
- MediatR;
- JWT tokens, an open standard that defines a compact and self-contained way of securely transmitting information between parties.

eShopOnWeb's GitHub Statistics

- Latest release: one and only release
- Starred by 7,808
- 296 closed issues
- Languages: C# - 72.5%, HTML - 20%, CSS - 3.3%, SCSS - 3%, Dockerfile - 1.2%

4. Miniblog.Core

Miniblog.Core is full-featured blog software. Simple but modern, Miniblog.Core is a performance-focused ASP.NET Core blogging engine, earning a 100/100 score on Google PageSpeed Insights on both desktop and mobile. As an open-source platform, it can be adapted to work with other .NET Core framework versions as well. You may notice that there isn't a lot of documentation in the GitHub repository that houses Miniblog. However, you can install a template to build it with Visual Studio by following the steps in the README file, but be careful, because this won't give you the most recent version. You can also check out its features by visiting an example site created with Miniblog and published on Azure.

Miniblog.Core's GitHub Statistics

- Latest release: one and only release
- Starred by 1,297
- 43 closed issues
- Languages: JavaScript - 39.4%, C# - 35.6%, HTML - 12.0%, SCSS - 8.8%, CSS - 4.2%

Start Your Own eCommerce Project

What are you waiting for? When it comes to developing solutions and expanding your knowledge with practical experience, there is no time like the present. Now that you have four potential projects to choose from, it's up to you to carve out time to dedicate to self-development. If you want to start an eCommerce career, you can begin by exploring the world's most popular shopping cart built on ASP.NET Core, nopCommerce. Download the powerful open-source eCommerce software based on .NET.
The concept of distributed applications is certainly not new: whoever has had a long IT career certainly remembers a number of different technologies implementing distributed components, even in the early years. Nowadays, it is all about microservices. They are the form in which we think of distributed computing today. Their peculiarity is that their communications are based essentially on REST and messaging protocols, which have the advantage of being widely adopted standards. The core concept is essentially the same: having pieces of the whole system completely independent from one another, each running in its own process.

The microservices world, coupled with the advent of cloud platforms, has paved the way for a thriving ecosystem of related technologies. This new architectural style has its drawbacks, and a number of specific patterns are required to overcome them. Spring Cloud is a Spring project based on Spring Boot that contains packages covering such patterns, both with its own solutions and by integrating third-party ones (like the Netflix OSS tools). In this article, we will show a list of the main microservices patterns and give a brief overview of how Spring Cloud copes with them. The present post is meant to be a quick introduction to the framework and the first of a series of articles aimed at covering the most important features and modules of Spring Cloud. In the next article, we will cover the basics of remote configuration, which is a fundamental piece of the Spring Cloud microservice ecosystem.

Monolithic Applications

We can describe a monolithic application as a self-contained system whose goal is to provide a range of functionalities in a single processing unit. The following picture shows what a monolithic application could look like. The main concept is that the application is decomposed into layers specialized in some general design logic, like business logic and database layers, but all those layers typically run in the same process and communicate with each other through internal method calls (internal to the Java Virtual Machine in which they run).

Microservice Applications

Microservice applications have a more complex structure. We can think of a microservice system as an evolution of a monolithic one where its main features are separated into independent applications, each running in its own process, and possibly internally decomposed into layers as in the monolithic schema depicted above. The following picture shows a rough example of what a microservice system could look like. It's an oversimplified schema, but it serves the purpose of giving a general understanding. In the picture, we have a gateway, which represents the entrance of the external world into the system, and some microservices, each in a separate node (a hardware or virtual machine). Each microservice also uses its own database instance running in a separate node. In reality, the way we deploy the single services does not follow rigid rules: we could have a single shared node hosting the database, or even a single node hosting the three services, each running in a separate process (we don't talk about containers here, just to keep things simple and generic).
Besides the specific deployment schema, the main difference compared to the monolithic scenario is that we have a number of features running in their own processes, connected typically by REST or messaging protocols. That is to say, they communicate by remote calls, and they are properly "distributed" components. This "distributed" nature allows us to develop each piece of the system independently. This way, we can enhance reuse: it is simpler to devise clean and robust designs in such specialized components, and the single pieces can be developed by completely independent teams. Unfortunately, this also comes with a number of drawbacks. How do we coordinate these distributed components? How do we deal with configuration in a centralized and consistent way? To exploit this new software paradigm, we need technologies that cope with its main drawbacks. The advent of cloud technologies has offered an enhanced and effective way of treating these concerns. This does not mean that microservices are the solution for each and every problem; sometimes a monolithic solution is the more natural choice. We can say that microservices can be an excellent choice for large and complex systems but lose some of their appeal for simpler ones.

Spring Cloud, Microservices, and Netflix OSS

Spring Cloud sits on top of the Spring Boot framework. It offers a number of solutions of its own and integrates with external tools to cope with the main microservices architectural concerns. Netflix OSS is a series of software solutions that cover the main microservices patterns. The Spring Cloud packages offer a layer of integration towards those solutions: it is enough to use the related Spring Boot starter dependencies and some specific configuration.

Setting Dependencies as Release Trains

To simplify dependency management, the concept of a release train has been introduced. Spring Cloud is a collection of modules, each specialized in some specific feature and each developed independently. A release train identifies a set of module releases that are verified to be fully compatible with each other. Once we have chosen the Spring Boot version to use, we have to pick a Spring Cloud release train compatible with that version and set it in a Maven dependency management section:

XML
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>2021.0.5</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Then we can set the specific modules' dependencies using Spring Cloud starters in the dependencies section. The dependency management section serves the single purpose of specifying the whole set of modules and related versions that we want to use. This way, we don't have to specify a version for the individual modules in the dependencies section. This Maven feature is called a BOM (Bill of Materials). Following the above dependency management settings, if we want our application to use the features of the Spring Cloud Config module, we can simply add a piece of configuration like this:

XML
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-config-server</artifactId>
    </dependency>
</dependencies>

To avoid searching the Spring documentation for a compatibility matrix when setting up an application from scratch, a practical way is to use the start.spring.io site. There you can first select the Spring Boot version and then the wanted Spring Cloud modules.
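With the spring-cloud-config-server dependency above in place, a config server boils down to a single annotated Spring Boot application. The following is a minimal sketch (the class name is illustrative; the Git backend location would be set separately in the application configuration):

Java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

// Turns this Spring Boot application into a centralized configuration server.
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}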
Spring Cloud and Microservice Patterns

We list here the main patterns involved in a microservice architecture and what the Spring Cloud packages offer for them:

- Distributed/versioned configuration
- Service registration and discovery
- Routing
- Service-to-service calls
- Load balancing
- Circuit breakers
- Distributed messaging

Distributed Configuration

An important concern with microservice architectures is how to deal with configuration. Since we have a number of services, each running in its own process, we could simply decide that each service is responsible for its own configuration; from the standpoint of system administration, however, this would be a nightmare. Spring Cloud provides its own solution to overcome this problem with a centralized configuration feature, named Spring Cloud Config. Spring Cloud Config uses Git as the first choice for its backend storage (an alternative is Vault from HashiCorp). Two alternatives to Spring Cloud Config are Consul by HashiCorp and Apache ZooKeeper (both with features not strictly limited to distributed configuration).

Service Registration and Discovery

One of the main characteristics of a microservices architecture is service registration and discovery. Each service registers itself with a central server (possibly distributed over more than one node for high availability) and uses that central server to find the other services to communicate with. Spring Cloud provides integration with the Netflix OSS tool Eureka. Eureka has both a client and a server package: the client is provided by the spring-cloud-starter-eureka dependency and the server by the spring-cloud-starter-eureka-server dependency. A service that implements the client side uses the server to register itself and, at the same time, to find other already registered services.

Distributed Logging and Tracing

Logging and tracing in microservices applications are not trivial tasks. We need to collect the logging activity in a centralized way and, at the same time, offer an advanced way of tracing service interactions across the whole system. The Spring Cloud Sleuth library is an available choice to solve these problems. Sleuth's most important feature is to associate all the secondary requests across microservices with the single incoming request that triggered them. So it tags all the logging involved with identification information and, at the same time, provides a complete picture of the interactions involved in each request, including timing information. Such information can then be exported to another tool named Zipkin, which is specialized in analyzing latency problems in a microservices architecture.

Routing

To establish communication between the external world and our microservices system, we need to route incoming requests to the right services. A Netflix OSS solution is Zuul, which can be used inside a Spring application through the spring-cloud-starter-zuul starter dependency and the related configuration. Zuul can play the role of an API gateway to the system and also work as a server-side load balancer.

Service-to-Service Calls

The main form of communication between services is the REST protocol. Spring offers RestTemplate as a synchronous client to perform REST calls (a more recent asynchronous alternative is WebClient). Spring Cloud also supports Netflix Feign as a REST-based client through the spring-cloud-starter-feign starter dependency. Feign uses specific annotations that allow you to define interfaces that are then implemented by the framework itself, as the sketch below illustrates.
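As a rough illustration (the service name, endpoint, and User type are hypothetical, and the exact annotation package depends on the release train in use), a declarative Feign client could look like this; the framework generates the implementation at runtime:

Java
import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// The interface is all we write; Feign implements it against the
// service registered under the name "user-service".
@FeignClient(name = "user-service")
public interface UserClient {

    // User is a plain DTO defined elsewhere (hypothetical).
    @RequestMapping(value = "/users/{id}", method = RequestMethod.GET)
    User findById(@PathVariable("id") Long id);
}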
Load Balancing

Load balancing is another important feature that microservices systems must implement. Different rules can be used: a simple round-robin, skipping servers that are overloaded, or an algorithm based on the average response time. Spring Cloud supports load balancing by integrating Ribbon, a library from Netflix OSS.

Circuit Breakers

A common scenario in a microservices system is the possibility that some service failure affects the other services and the whole system. The Circuit Breaker pattern tries to solve this problem by defining a failure threshold, after which the service immediately interrupts its execution and returns some form of predefined result. Hystrix is a Netflix OSS library that implements this pattern. To include Hystrix in a Spring Cloud project, the spring-cloud-starter-hystrix Spring Boot starter must be used.

Distributed Messaging

Besides the classic REST-style communication between services, we also have the option of using messaging architectural patterns. We can base our whole architecture, or just a part of it, on publish/subscribe or event-driven point-to-point messaging. The Spring Cloud Stream package allows us to implement message-driven microservices and integrates the most popular message brokers, like RabbitMQ and Apache Kafka.

Conclusion

Microservices architectures require a number of patterns to overcome their inherent complexity. Spring Cloud provides solutions to these patterns with its own implementations and with integrations of third-party tools as well. In this article, we have given a quick overview of the main modules of Spring Cloud. In the next article, we will cover the basics of the Spring Cloud Config module. We have also covered some important features of Spring Boot, which is the base platform of Spring Cloud, in the following articles:

- Spring Boot for Cloud: Configuration and Dependencies
- Spring Boot for Cloud: REST API Development
- Spring Boot for Cloud: Actuator
AngularJS is a very powerful JavaScript framework. Many organizations use this framework to build their front-end single-page applications easily and quickly. With AngularJS, you can create reusable code to reduce code duplication and easily add new features. According to usage statistics, 4,126,784 websites use AngularJS. As AngularJS has gained popularity in the web development community, its testing frameworks have had to grow along with it. Currently, the most popular framework for unit testing Angular applications is Jasmine, and several others are gaining traction.

In this article on AngularJS testing, let's understand the difference between Angular and AngularJS, along with the top features, benefits, testing methodologies, and components. However, before we get into the details, let us first understand the basics of AngularJS.

What Is AngularJS?

AngularJS is a robust JavaScript framework for building complex, single-page web applications. It's based on the MVC (Model-View-Controller) design pattern, and it uses dependency injection to make your code more modular and testable. AngularJS emphasizes cleanliness and readability; its syntax is lightweight, consistent, and simple to read. The framework also allows you to easily separate presentation from business logic, which makes it suitable for both small and large projects with complex client requirements.

AngularJS has been used in production by several large companies, such as Google and Microsoft, as well as other organizations like NASA. It was created by Google employee Misko Hevery, who still maintains the development of the framework. And it's open-source software released under the MIT license, so it's free to use commercially.

There are different versions of Angular available in the market. The first version, AngularJS 1.0, was released in 2010 by Google. Angular 2.0 was released in September 2016. Angular 4.0 and 5.0 were released in March 2017 and November 2017, respectively. Google provides all the necessary support to this framework, and with a broad developer community, the features and functionalities are always kept up to date.

Let's now understand the importance of AngularJS.

Why AngularJS?

The following are the main justifications for choosing AngularJS as your go-to framework:

- AngularJS allows you to work with components, and these components can be reused, which saves time and unnecessary effort spent on coding.
- It is a great framework that allows you to create Rich Internet Applications.
- It allows developers to write client-side applications using JavaScript in a Model-View-Controller (MVC) architecture.
- It is an open-source, free framework, meaning an active developer community contributes to it.

Top Features of AngularJS

AngularJS is a JavaScript framework that has quickly gained popularity because of its powerful features. The framework is mainly used for building client-side web applications and is designed to make the development process easier, faster, and more efficient. It accomplishes this by providing two-way data binding, dependency injection, a modular architecture, and much more. Let's look at some of the top features of AngularJS:

Model-View-Controller (MVC) Architecture

MVC is a popular architecture with three main components:

- Model: Manages the application data requirements.
- View: Displays the required application data.
- Controller: Connects the model and the view components.

MVC is about splitting your application into these three components and performing the corresponding coding requirements.
This is done in AngularJS, where we can effectively manage our code with less time and effort.

Data Model Binding

There is complete synchronization between the model and view layers. This means that any change in the data of the model layer automatically propagates to the view layer and vice versa, ensuring the model and view are kept up to date at all times.

Support for Templates

A main advantage of using AngularJS is its template support: you can use templates effectively for your coding requirements. Apart from the above great features, there is a predefined testing framework called Karma that helps create unit tests for AngularJS applications, which is unique.

Limitations of Using AngularJS

AngularJS contains many features that make it a powerful tool. However, it has limitations that developers should be aware of when deciding to use it, including:

- Finding the right developers who understand this complicated framework can be challenging.
- There are security issues, since it is a JavaScript-only framework; you have to rely on server-side authentication and authorization to secure your application.
- Once the user disables the executed JavaScript, nothing will be visible except the basic details.

Components of AngularJS Applications

Building a single-page web app with AngularJS can be as simple as linking to the JavaScript file and adding the ng-app directive to the HTML. However, this setup is only suitable for small applications. When your AngularJS app starts to grow, it's essential to organize it into components. The component pattern is a well-established way to solve this problem in the object-oriented world. AngularJS refers to these as directives and follows the same basic principle of isolating behavior from markup. An AngularJS application consists of three main directives:

- ng-app: Defines and links an AngularJS application to HTML.
- ng-model: Binds the values of AngularJS application data to the corresponding HTML controls.
- ng-bind: Binds the AngularJS application data to HTML tags.

Differences Between Angular and AngularJS

AngularJS and Angular are two different frameworks. AngularJS is a full-featured framework for building dynamic, single-page applications. Angular was built on the design principles of AngularJS, but it is not simply an upgrade or update; as a different framework, it has some significant differences from AngularJS. The most basic difference between the two is that Angular is based on TypeScript, a superset of JavaScript that adds static typing and class-based object-oriented programming to an otherwise standard JavaScript language.

- Angular uses components and directives, while AngularJS supports MVC architecture.
- Angular is written in Microsoft's TypeScript language, while AngularJS is written in JavaScript.
- Angular is supported on popular mobile browsers, while AngularJS does not support mobile browsers.
- It is easier to maintain and manage large applications in Angular; it is difficult in AngularJS.
- Angular comes with support for the Angular CLI tool; AngularJS doesn't have a CLI tool.
Prerequisites Before Learning AngularJS There are some prerequisites that need to be met before you start implementing or even testing AngularJS applications. They include: Knowledge of HTML, CSS, and JavaScript. JavaScript functions and error handling. A basic understanding of the Document Object Model (DOM). Concepts related to Model View Controller (MVC). Basic knowledge of libraries. Angular CLI understanding and implementation. Creating AngularJS Applications Follow the steps below to create and execute an AngularJS application in a web browser: Step 1: Load the framework using a <script> tag whose src attribute points at the AngularJS library: <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.8.2/angular.min.js"></script> Step 2: Define the AngularJS application using the ng-app directive: <div ng-app=""> ........ </div> Step 3: Define a model name using the ng-model directive: <p>Enter your Required Name: <input type="text" ng-model="name"></p> Step 4: Bind the above model using the ng-bind directive: <p>Hello <span ng-bind="name"></span>!</p> Step 5: Put the above pieces together in an HTML page and open it in a web browser to validate the result. Testing AngularJS Applications Using Different Methodologies AngularJS is a modern web application framework that promotes cleaner, more expressive syntax for all types of applications. With its reliance on dependency injection and convention over configuration, it can make writing applications more efficient and consistent. However, AngularJS applications must be tested to ensure they function properly. Most AngularJS developers know that the framework is based on an MVC pattern and that there are many different approaches to testing its applications. With so many frameworks and libraries available today, it is easy to get lost in the sea of choices. In this section, we’ll take a look at the main frameworks for testing AngularJS applications: Jasmine, Karma, Protractor, and Cypress. Jasmine Jasmine is one of the most popular unit-testing frameworks for JavaScript. It has a strict syntax and a BDD/TDD flavor, making it a great fit for AngularJS testing (a sample spec appears below). Karma Karma is a JavaScript test runner created by the AngularJS team itself, and it is one of the best options for AngularJS testing. Jasmine is the framework in which you write your AngularJS tests, while Karma provides the tooling that makes it easier to run those Jasmine tests. To install Karma, you first need Node.js on your machine; once Node.js is installed, you can install Karma using npm. Protractor Protractor is an end-to-end testing framework for AngularJS applications. It is a Node.js program built on top of WebDriverJS and runs tests against the application in a real browser. You can use this framework for functional testing, but you are still required to write unit and integration tests. Cypress Cypress is a JavaScript E2E testing framework that can be used for AngularJS testing. It comes with bundled packages such as Mocha, Chai, and Sinon. However, the only language Cypress supports is JavaScript. How to Perform AngularJS Testing? Unit testing has become a standard practice in most software companies: features and improvements must be tested before the code is released to the production server.
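To make this concrete, here is what a minimal Jasmine spec for the UserController sketched earlier in this article might look like, using the module() and inject() helpers from the angular-mocks library. It is a sketch under those assumptions, not production code:

// Minimal Jasmine spec for the illustrative UserController
describe('UserController', function () {
  // Load the (hypothetical) module under test
  beforeEach(module('demoApp'));

  it('greets the current user', inject(function ($controller, $rootScope) {
    var $scope = $rootScope.$new();
    $controller('UserController', { $scope: $scope });
    expect($scope.greet()).toBe('Hello, Ada!');
  }));
});

Whether specs like this are run by hand or through Karma in a CI pipeline, the goal is the same: catching regressions before code reaches users.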
The following aspects are covered during the testing phase: Validation of the product requirements being developed. Validation of test cases and test scenarios by the testing teams. AngularJS testing can be performed in two ways: Manual testing Automation testing Manual testing is all about executing different test cases by hand, which takes considerable time and effort. It is performed by a team of manual testers who review and validate the required test cases for the features and enhancements planned in every sprint. Automation testing is a far more effective and quicker way of meeting testing requirements, performed with an automation testing tool. Many organizations have shifted their focus from manual to automation testing, as this is where the real value lies. Gone are the days of traditional testing, when a large amount of time was spent setting up the testing environment and finalizing infrastructure requirements. Cross-browser testing is also essential when running a web application on different supported browsers: it allows you to validate the web application's functionality and other dependencies. Summary We discussed how AngularJS is a prominent open-source framework for building single-page web applications. There are different testing methodologies you can adopt for AngularJS testing to achieve reliable outcomes in the long run, and cross-browser testing platforms play a crucial role when validating your application across the different supported platforms and devices.
PHP (Hypertext Preprocessor) is a programming language that is primarily used for web development. It is a server-side language, which means that it is executed on the server rather than in the user’s web browser. PHP is often used in combination with HTML, CSS, and JavaScript to create dynamic and interactive websites. One of the main advantages of PHP is that it is easy to learn and use, making it a popular choice for web developers of all skill levels. PHP also has a large and active developer community, which provides a wealth of resources for those who want to learn more about the language or get help with specific issues. In PHP, there are several frameworks available that make it easy to create REST APIs, including Laravel, Slim, and Lumen. These frameworks provide a range of features and libraries to help developers create APIs quickly and efficiently, including support for routing, request and response handling, and data validation. So whether you are building an API for a small project or a large application, there is likely a PHP web development framework that can meet your needs. When choosing an API framework for PHP, there are a few key factors to consider: Performance: If your API will be handling a large number of requests or processing a lot of data, performance is an important factor to consider. Look for a web framework that is optimized for speed and efficiency. Ease of use: Consider the complexity of the framework and whether it is easy to learn and use. This is especially important if you are new to PHP or website development. Features and libraries: Think about the features and libraries that you will need for your API and whether the framework you are considering has built-in support for them. You may also want to consider the extensibility of the framework in case you need to add custom functionality. Community and resources: Look for a modern framework with a large and active community of developers and a wide range of resources available, including documentation, tutorials, and packages. This can make it easier to get help and find solutions to problems. Compatibility: Consider whether the framework is compatible with the version of PHP you are using and the hosting environment you will be deploying to. It is also a good idea to try out a few different frameworks to see which one best fits your needs and preferences as a web developer. The Frameworks 1. Laravel 5 A web application framework with expressive, elegant syntax, Laravel strives to make development an enjoyable, creative experience. The goal of Laravel is to make development easier by easing the common tasks used in most web projects through key features such as: A simple, fast routing engine. A powerful dependency injection container. Multiple back-ends for session and cache storage. Database-agnostic schema migrations. Robust background job processing. Real-time event broadcasting. Pros of using Laravel: Ease of use: Laravel is known for its expressive syntax and intuitive design, which makes it easy for developers to get started and build applications quickly. Large community and resources: The Laravel framework has a large and active community of developers and a wide range of resources available, including documentation, tutorials, and packages. This makes it easier for developers to get help and find solutions to problems.
Built-in support for common tasks: Laravel has built-in support for common tasks such as URL routing, database access, and form validation, which can save time and effort for developers. Cons of using Laravel: Complexity: While Laravel is relatively easy to learn and use, it can be more complex than some other frameworks because it offers a wide range of features and libraries. This may make it more challenging for developers who are new to PHP or web development. Performance: While Laravel is generally fast and efficient, it may not be the best choice for applications that need to handle a huge number of requests or process a large amount of data. In these cases, a lighter framework or microframework may be a better option. Compatibility issues: Laravel may not be compatible with older versions of PHP or certain hosting environments, which can limit its use for certain projects or teams. To learn more about the Laravel framework, you can check out the docs here. 2. Guzzle Sending HTTP requests and creating web service clients is made easy with Guzzle. Along with service descriptions, resource iterators allow you to traverse paginated resources efficiently, and batching allows you to send a large number of requests in a timely manner. It’s a framework that includes everything you need to build a robust web service client. Pros of using Guzzle: HTTP request support: Guzzle is specifically designed to make it easy to send HTTP requests and handle responses, so it is a good choice for projects that need to interact with web services or APIs. Flexibility: Guzzle is a standalone library that can be used in any PHP project, whether it is built on a full-stack framework or as a custom application. This allows developers to use it in a variety of projects and environments. Extensibility: Guzzle is designed to be modular and extensible, allowing developers to add custom functionality or extend existing features. Documentation: Guzzle has comprehensive documentation and a strong community of users, making it easier for developers to get started and find answers to questions or problems. Cons of using Guzzle: Limited features: Because Guzzle is a standalone library, it does not include many of the features found in full-stack frameworks, such as routing, database access, and form validation. Developers may have to use other libraries or write their own code to implement these features. Complexity: While Guzzle is relatively easy to learn and use, it can be more complex than some other HTTP client libraries because it offers a wide range of options and features. This may make it more challenging for developers who are new to HTTP or web development. Compatibility issues: Guzzle is a standalone library, so it may not be compatible with certain hosting environments or older versions of PHP. This can limit its use for certain projects or teams. To learn more about the Guzzle framework, you can check out the docs here. 3. Leaf PHP The Leaf MVC framework is a simple PHP framework for creating powerful web apps and APIs. Leaf MVC is powered by the Leaf Core and modeled on the Ruby on Rails and Laravel frameworks. Pros of using Leaf: Lightweight and easy to learn: Leaf is a minimalistic framework, which makes it easy to learn and use for developers who are new to PHP or web development. Performance: Leaf is designed to be fast and efficient, making it a good choice for applications that need to handle a large number of requests or process a lot of data.
MVC architecture: Leaf follows the Model View Controller (MVC) architecture, which helps to separate the presentation layer from the business logic and data management. This can make it easier to develop and maintain larger applications. Built-in support for common tasks: Leaf comes with built-in support for common tasks such as routing, database access, and form validation, which can save time and effort for developers. Cons of using Leaf: Limited community support: Leaf is a relatively new and lesser-known framework, which means that there may be less community support and fewer resources available compared to more established frameworks like Laravel or CodeIgniter. Limited features: Because Leaf is a minimalistic framework, it may not have as many built-in features and libraries as other frameworks. This means developers may have to rely on external libraries or write their own code to implement certain features. Limited documentation: Leaf has limited documentation compared to other frameworks, which can make it more challenging for developers to get started and find answers to questions or problems. Compatibility issues: Because Leaf is a relatively new framework, it may not be compatible with older versions of PHP or certain hosting environments. This can limit its use for certain projects or teams. To learn more about the Leaf framework, you can check out the docs here. 4. Slim The Slim PHP micro-framework allows you to develop APIs and web applications quickly and easily. At its core, Slim is a microframework designed to receive HTTP requests, route the requests to the relevant controllers, and return the corresponding HTTP responses. This simplicity makes Slim easy to learn and performant. Pros of using Slim: Lightweight and easy to learn: Slim is a minimalistic framework, which makes it easy to learn and use for developers who are new to PHP or web development. Performance: The Slim framework is designed to be fast and efficient, making it a good choice for applications that need to handle a large number of requests or process a lot of data. Extensibility: The Slim framework is designed to be modular and extensible, allowing developers to add custom functionality or extend existing features. Cons of using Slim: Limited features: Because Slim is a minimalistic framework, it may not have as many built-in features and libraries as other frameworks. This means that developers may have to rely on external libraries or write their own code to implement certain features. Limited documentation: Slim has limited documentation compared to other frameworks, which can make it more challenging for developers to get started and find answers to questions or problems. Compatibility issues: Slim may not be compatible with older versions of PHP or certain hosting environments, which can limit its use for certain projects or teams. To learn more about the Slim framework, you can check out the docs here. 5. Lumen Laravel Lumen is a stunningly fast PHP micro-framework for building web applications with expressive, elegant syntax. By easing common tasks that are frequently encountered in most web projects, such as routing, database abstraction, queueing, and caching, Lumen aims to make development easier. Pros of using Lumen: Performance: Lumen is designed to be fast and efficient, making it a good choice for building simple REST APIs and microservices that need to handle a large number of requests or process a lot of data.
Compatibility with Laravel: Because Lumen is based on Laravel, developers who are familiar with Laravel will find it easy to learn and use. This also means that Lumen can benefit from the large community of Laravel developers and resources. Extensibility: Lumen is designed to be modular and extensible, allowing developers to add custom functionality or extend existing features. Cons of using Lumen: Limited features: Because Lumen is a minimalistic framework, it may not have as many built-in features and libraries as other frameworks. This means that developers may have to rely on external libraries or write their own code to implement certain features. Limited documentation: Lumen has limited documentation compared to other frameworks, which can make it more challenging for developers to get started and find answers to questions or problems. Compatibility issues: Lumen may not be compatible with older versions of PHP or certain hosting environments. This can limit its use for certain web app projects. To learn more about the Lumen framework, you can check out the docs here. Adding in API Analytics and Monetization Building an API is only the start. Once your API endpoint is built, you’ll want to monitor and analyze incoming traffic in addition to using your API testing tool. Doing this lets you identify potential issues and security flaws and determine how your API design is used. These can all be crucial aspects in growing and supporting your APIs. As your API platform grows, you may focus on API products. This is the shift from simply building APIs into the domain of using the API as a business tool. Much like a more formal product, an API product needs to be managed and will likely be monetized. Building revenue from your APIs can be a great way to expand your business’s bottom line. With an API analytics tool, you can achieve all of the above. These tools can easily integrate through either an SDK or plugin and be up and running in minutes. Once an API analytics tool is integrated with your APIs, you’ll be able to explore charting and reporting to look at the following: Live API traffic, Time-series reports inspecting usage, Conversion funnels, Retention reports, And much more… Wrapping Up In this article, we covered five popular PHP frameworks for developing RESTful APIs with the PHP programming language. We looked at a high-level overview of each and listed some points for consideration. We also discussed some key factors in deciding which PHP web application framework to use.
This is an article from DZone's 2022 Enterprise AI Trend Report. For more: Read the Report The Knowledge Graph: What It Is, The Rise, and The Purpose A knowledge graph (KG) is a semantic network of an organization or a domain, in which the nodes are entities and the edges are the relationships between them. It is a framework comprising a set of related yet heterogeneous data (image, sound, text, video, numbers, etc.) that carries a semantic interpretation and lets researchers run complex algorithms on graph data to generate insight. An RDF (Resource Description Framework) triplestore, a kind of graph database, stores the data as a network of objects, or RDF triples, that segregate the information into subject-predicate-object expressions (a toy illustration appears later in this article). A simple example of relations among entities is shared in Figure 1 for ease of understanding. Figure 1: Relational knowledge graph Mathematician Leonhard Euler, the father of graph theory, first used graphs to reason about the famous Seven Bridges of Königsberg problem. With the revolution of big data, organizations started looking beyond traditional relational databases like RDBMS. The NoSQL movement let organizations store both structured and unstructured data in data lakes. Different types of databases, like MongoDB for documents and Neo4j for graphs, came into existence with capabilities for graph storage and processing. However, they were not free from problems, as they lacked the formal data schemas and consistency needed to run complex analytics models. KGs bridged the gap and instantly became the cynosure of large organizations. KGs serve a three-fold goal. First and foremost, a KG helps users discover information more quickly and easily through search. Secondly, a KG provides side and contextual information for developing an intelligent recommendation engine. Finally, it can help answer queries and make predictions through Knowledge Graph Question Answering (KGQA). The key approaches for generating answers from questions are shared below in Table 1.
Semantic parsing: parses the natural language question; SPARQL is used to search the KG.
Information retrieval: natural language questions are transformed into structured queries to find possible answers; feature and topic graphs are used to retrieve the best answer.
Embedding: calculates proximity scores between questions and plausible answers; uses the vector modeling approach.
Deep learning (DL): DL on NLP is applied, like multi-column convolutional neural networks (MCCNNs) for image analysis; bidirectional long short-term memory (BiLSTM) is used to understand the questions better.
Table 1: Basis of KGQA
Developing the Knowledge Graph Automated knowledge acquisition and semantic mapping are the pillars of developing a KG. The process of ontology engineering for knowledge acquisition starts with ontology learning, which aims to automatically learn relevant concepts and establish relations among them. To achieve this, the corpus is first parsed to identify collocations; subsequently, the semantic graph is retrieved. Entity enrichment takes place by crawling semantic data and merging new concepts from relevant ontologies. Figure 2: Process of entity enrichment
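To ground the subject-predicate-object model mentioned above, here is a toy in-memory triple store in JavaScript. It is purely illustrative (the data, names, and query function are hypothetical); a real system would use an RDF triplestore queried with SPARQL:

// Toy subject-predicate-object triples (illustrative data only)
const triples = [
  { s: 'Alice', p: 'worksFor', o: 'Acme' },
  { s: 'Acme', p: 'locatedIn', o: 'Berlin' },
];

// Match a (subject, predicate) pattern, in the spirit of the SPARQL query
// SELECT ?o WHERE { :Alice :worksFor ?o }
function query(s, p) {
  return triples.filter(t => t.s === s && t.p === p).map(t => t.o);
}

console.log(query('Alice', 'worksFor')); // ['Acme']

Everything a KG stores, however rich, reduces to triples like these; the power comes from the scale of the network and the semantics layered on top of it.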
Integrating heterogeneous data from structured sources demands mapping the local schemas to the global schema. Global-as-view (GAV), a mediation-based data integration strategy, is implemented here: the global schema acts as a view over the source schemas, converting a global query into source-specific queries. Detecting the semantic type is the first step of automated semantic mapping, followed by inferring the semantic relation. Data are initially modeled using RDF; subsequently, RDF Schema (RDFS) and the Web Ontology Language (OWL) add semantics to the schema. Semantic information can also be mapped in a hierarchical way through relational vectors. Graph neural networks (GNNs), like graph convolutional networks (GCNs) or gated graph neural networks, are used for object detection and image classification on graph data. Enterprise Knowledge Graph Today's organizations are in pursuit of hidden nuggets of information, so they are interlocking their siloed data by consolidating, standardizing, and reconciling it. An enterprise knowledge graph thus provides an explicit representation of the knowledge in business data as a graph. An enterprise with integrated data gains a web of knowledge that uncovers critical hidden patterns and helps monetize its data. Figure 3: Steps to develop an enterprise knowledge graph Real-World Knowledge Graphs We are inundated with data in the present world. KGs give meaning and purpose to connected data in many applications, a few of which are shared below. Financial Services Knowledge Graph KGs have wide applications in financial services, ranging from fraud detection and tax calculations to financial reporting and stock price prediction. Fraud rings, in which a few people collectively commit fraud, can easily be identified by examining the topology of the subgraphs. Figure 4: Fraud detection through a knowledge graph Stock prices can be predicted by linking the sentiment of news about the respective company. Hedge funds and banks use KGs for better predictions by mapping their existing models to the alternative data KGs provide. Medical Science Biomedical concepts and relationships are represented in the form of nodes and edges. By applying KGs, medical imaging analysis can be used for disease classification, medication, segmentation, report generation, and image retrieval. Textual medical knowledge (TMK) from the Unified Medical Language System (UMLS) is analyzed to generate key medical insights and personalized patient reports. Real-Time Supply Chain Management Supply chain organizations use KGs to optimize inventory stock, replenishment, network and distribution management, and transportation management. The connected supply chain KG takes inputs from the manufacturing KG (production, including personnel) and the retail KG (real-time and forecasted demand) for better prediction and management (Figure 5). Figure 5: Constituent components to develop a supply chain knowledge graph Conclusion A knowledge graph has the power to create a virtual world where all entities are connected with a proven relationship. Sophisticated machine learning algorithms are applied to prune the connections where the probability of a relationship is slim. Thus, proven relationships among all objects in the world can be established through a KG. With all the past and present data, a KG produces deep insights by recognizing patterns. A KG also helps us predict the future given all the relevant data leading up to a phenomenon.
Future KGs could be even more powerful, with the road ahead shared below: Graph of Things (GoT): GoT is an innovative project that aims to merge the high-volume streaming data of the Internet of Things (IoT) with the static data of the past. Quantum AI for KG: Quantum AI can leverage the power of quantum computing for running GNNs on the KG and can achieve results beyond the capabilities of traditional computing. A world with all information connected through a KG would indeed be magnificent if the benefits are harnessed for the welfare of society. AI on top of KGs, when used with the right intent, will make the world a better place to live.
Part 1: Let’s Start With the Domain and the First Visual Component of Our Wordle App By now, you have probably heard of Wordle, an app that gained popularity in late 2021 and continues to attract thousands of users to this day. In order to unravel how it works and learn about Jetpack Compose, the new Android library for creating user interfaces, we are going to create a replica of this well-known app. We are going to start the design of our domain with its most basic part: modeling how we want to represent the letters. Since the initial state of our game will be a 6×5 board (empty at first and filled in little by little), we can represent these cells as a sealed class such as:

sealed class WordleLetter(open val letter: String) {
    object EmptyWordleLetter : WordleLetter("")
    data class FilledWordleLetter(override val letter: String) : WordleLetter(letter)
}

We can also add validation to the FilledWordleLetter entity since, for convenience, we are representing the letter attribute as a String. We want it to contain one and only one letter, so we can add this check in an init block of FilledWordleLetter and throw an exception when the condition is not fulfilled:

init {
    if (letter.count() != 1) {
        throw IllegalArgumentException("A WordleLetter must contain exactly one letter")
    }
}

In addition, we also need to represent the state of each letter on our board. For this, we will use an enum class such as:

enum class LetterStatus { EMPTY, NOT_CHECKED, NOT_INCLUDED, INCLUDED, MATCH }

Later, we will also add the colors in which we will paint each cell, corresponding to each of its possible states. Now that we have a basic representation of our letters and their possible states, we can start building the entities that will represent each component of our board, starting once again with the letters. For this, we can create an entity that represents a letter together with its state:

data class BoardLetter(
    val letter: WordleLetter,
    val state: LetterStatus
)

Each row of the board will be formed by a List<BoardLetter> we can call BoardRow, and the complete board will be formed by a List<BoardRow>. We will build these entities later, but for now, it is enough to know that this will be their representation. If we look closely at this implementation, we can see that the board is really a List<List<BoardLetter>>, but since we need to add functionality to each component of this structure, I have preferred to divide it into concrete classes to make the implementation easier and clearer. But let’s not get too far ahead of ourselves; for now, we have the representation of a letter with its state on the board, so let’s start adding functionality to this class. The first thing we want to be able to do with our BoardLetter is write a letter, but how can we do that if all the members of our entity are immutable? Easy! We have used a data class, which provides the .copy method: instead of mutating our entity, we create a new instance of it with the modifications we specify. Just as we want to add letters, we will also want to remove them, and we will do exactly the same as with creation, using the .copy method that lets us maintain the immutability of our entity.
fun setLetter(aLetter: String) = copy(
    letter = WordleLetter.FilledWordleLetter(aLetter),
    state = LetterStatus.NOT_CHECKED
)

fun deleteLetter() = copy(
    letter = WordleLetter.EmptyWordleLetter,
    state = LetterStatus.EMPTY
)

Finally, we will also add a convenience method for creating empty letters from which to start working. We will create this method inside a companion object so we can invoke it without an instance of the class:

fun empty() = BoardLetter(WordleLetter.EmptyWordleLetter, LetterStatus.EMPTY)

Great! We now have the entity that represents a letter in our game, as well as a first approximation of the functionality we will need throughout our development. We cannot forget to write the tests for this class. I will not go into detail since they are trivial for this implementation, but they can be consulted here. Now that we have the implementation of our domain ready, we can create its Jetpack Compose representation. For this, we are going to create a Composable called LetterBox, which will receive as parameters the letter that we want to paint and its state:

@Composable
fun LetterBox(
    letter: WordleLetter,
    state: LetterStatus
)

We want this component to show the letter the user has written, and we also want its background painted in a different color depending on the state of the letter. The simplest way to achieve this would be to add the background directly to a Text composable. However, to make it look a little more elegant, we will use a Card, so our component will look like this:

@Composable
fun LetterBox(
    letter: WordleLetter,
    state: LetterStatus
) {
    Card(
        shape = RoundedCornerShape(16.dp),
        colors = CardDefaults.cardColors(containerColor = mapToBackgroundColor(state)),
        elevation = CardDefaults.cardElevation(defaultElevation = 4.dp),
        modifier = Modifier.aspectRatio(1f)
    ) {
        Text(
            modifier = Modifier
                .fillMaxSize()
                .wrapContentHeight(),
            text = letter.letter,
            textAlign = TextAlign.Center
        )
    }
}

private fun mapToBackgroundColor(state: LetterStatus) = when (state) {
    EMPTY, NOT_CHECKED -> Color.White
    NOT_INCLUDED -> Color.LightGray
    INCLUDED -> Color.Yellow
    MATCH -> Color.Green
}

We take advantage of this helper to map the different states of each box to a different color, following the rules of the game. Once we have created this component, we can visualize it thanks to Compose’s @Preview:

@Preview
@Composable
fun Preview() {
    LetterBox(
        letter = WordleLetter.FilledWordleLetter("A"),
        state = INCLUDED
    )
}

So much for this first installment on creating something similar to the Wordle app with Jetpack Compose. In future articles, we will create each of the rows of our board from the components built here, and we will finally create the complete game board, along with a dictionary to load the words we will use and all the logic related to the game. The complete code for the entire application can be found at this link. Until next time!
Node.js has seen meteoric growth in recent years, making it one of the most popular programming languages on the web. By combining JavaScript on the front end with Node.js for backend development, JS developers can create powerful and scalable apps that offer benefits not found elsewhere. How to Pick an API Framework If you’re a Node.js developer looking to create a REST API with Node.js, there are many different JavaScript frameworks you can choose from. With so many options available, it can be difficult to know which one is right for your app development. In this article, we’ll go over the top five Node.js REST API frameworks and help you decide which one is best for your application programming interface (API) development. When choosing a Node.js REST API framework, there are a few things to keep in mind. First, consider what kind of functionality you need from your API. Do you need a simple CRUD API or something more complex? Second, think about how much control you want over the structure of your API; some Node.js frameworks provide more flexibility than others. Finally, take into account the size and scope of your application: some frameworks are better suited for large web apps, while others work better for small ones. Ease of use: How easy is the framework to use? Is it well-documented? Performance: How fast is the framework? Does it scale well? Features: What features does the framework offer? Does it support everything you need? Community: Is there a large and active web developer community around the framework? With all that in mind, let’s take a look at some of the top Node.js REST API frameworks: Express The Express framework is a popular Node.js framework for building web and mobile applications. It’s most commonly used as a router to create single-page, multi-page, and hybrid applications. Express.js is built on top of Node.js and provides an all-in-one package for managing servers, routes, and more (a minimal route sketch appears a little later in this article). Pros Connects to databases like MySQL, MongoDB, etc. Uses middleware for request handling Asynchronous Express provides dynamic rendering of HTML pages based on passing arguments to templates Open-source framework Cons Issues with callbacks Errors can be challenging to understand Not well suited to CPU-intensive tasks that require large amounts of processing power To learn more about the Express framework, you can check out the docs here. FeathersJS FeathersJS is a JavaScript framework used for highly responsive real-time apps. It simplifies JavaScript development while still being advanced. FeathersJS enables JS developers to control data through RESTful resources, meaning they don’t need external data stores or databases. Developers can also create REST APIs with Feathers commands, making it easier for your web app to communicate with third-party applications and services like Twilio or Stripe. You can also integrate FeathersJS into various JavaScript frameworks. Pros Real-time API support Good documentation for the development process Supports both the JavaScript and TypeScript programming languages CLI scaffolding tool Supports both relational and non-relational databases Cons It uses PassportJS, which does not provide SAML authentication out of the box Larger-scale real-time applications in FeathersJS can run into WebSocket issues To learn more about the FeathersJS framework, you can check out the docs here.
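To show what the simplest possible Express API looks like in practice, here is a minimal sketch of the create and read halves of a CRUD service. The route paths and in-memory store are illustrative only, not from any particular project:

// Minimal Express CRUD sketch (routes and in-memory store are illustrative)
const express = require('express');
const app = express();
app.use(express.json()); // parse JSON request bodies

const users = new Map();
let nextId = 1;

// Create a user
app.post('/users', (req, res) => {
  const user = { id: nextId++, ...req.body };
  users.set(user.id, user);
  res.status(201).json(user);
});

// Read a user by id
app.get('/users/:id', (req, res) => {
  const user = users.get(Number(req.params.id));
  if (user) res.json(user);
  else res.sendStatus(404);
});

app.listen(3000);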
LoopBack LoopBack is a Node.js framework that JS developers and businesses can use to build APIs and microservices with TypeScript packages. It offers multiple advantages for application development, including the following: Health checks for monitoring Metrics for collecting data about system performance Distributed tracing for tracking issues across microservices Logging so you can gather insights about what’s going on within your applications Built-in Docker files so you can quickly build new projects without having to worry about any of the infrastructure All this combined makes LoopBack one of the few Node.js frameworks that support proprietary databases like Oracle, Microsoft SQL Server, IBM Db2, etc. It also provides an easy bridge to SOAP services, making it one of only a handful of Node.js frameworks with SOAP integration. Pros Code is modular and structured Good ORM with available connectors Built-in user and access-role feature Built-in API Explorer via Swagger Cons Monolithic architecture Opinionated architecture Not as much community support Steep learning curve To learn more about the LoopBack framework, you can check out the docs here. NestJS Nest is a framework for building modern Node.js applications with a high-performance architecture that takes advantage of the latest JavaScript features by using progressive JavaScript (TypeScript), functional programming principles, and reactive programming. It combines the best of the object-oriented and functional reactive programming approaches, so you can choose your preference without being forced to conform to one particular ideology. Pros NestJS includes a built-in dependency injection container, which makes it easier to keep your code modular and readable You can create software solutions whose components can be taken out and changed, meaning there is no strong coupling between them The use of modular structures simplifies dividing a project into separate blocks and helps in using external libraries in a project Easy to write simple API endpoints Cons Developers know less about what’s going on under the hood, which means debugging is trickier and takes longer NestJS may be lacking in features compared to frameworks in other languages, such as Spring in Java or .NET in C# Complicated development process To learn more about the NestJS framework, you can check out the docs here. Moleculer Moleculer is a Node.js framework that helps you build microservices quickly and efficiently. It also gives you tools for fast recovery in the event of failure, so your services can continue running efficiently and reliably. Health monitoring ensures everything is up to date and any problems are quickly detected and fixed. Pros Fast performance Open-source framework Durability Fault-tolerant framework with circuit-breaker and load-balancer features Cons Lack of documentation Lack of community support Limited options for setting up enterprise-grade APIs, along with other restrictions Not as feature-rich as other frameworks To learn more about the Moleculer framework, you can check out the docs here. Adding in API Analytics and Monetization Building an API is only the start. Once your API endpoint is built, you’ll want to make sure that you are monitoring and analyzing incoming traffic. By doing this, you can identify potential issues and security flaws and determine how your API is being used. These can all be crucial aspects in growing and supporting your APIs.
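For a sense of what the simplest possible usage tracking looks like before reaching for a full analytics product, here is a tiny Express middleware that counts requests per method and route. It is purely illustrative; dedicated analytics tools do far more:

// Tiny request-counting middleware (illustrative only)
const counts = new Map();

function usageCounter(req, res, next) {
  const key = req.method + ' ' + req.path;
  counts.set(key, (counts.get(key) || 0) + 1);
  next(); // hand off to the next handler
}

// Attach before the routes you want to measure:
// app.use(usageCounter);

Real tools add time-series storage, dashboards, and alerting on top of exactly this kind of hook.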
As your API platform grows, you may focus on API products. This is the shift from simply building APIs into the domain of using the API as a business tool. Much like a more formal product, an API product needs to be managed and will likely be monetized. Building revenue from your APIs can be a great way to expand your business’s bottom line. With a reliable API monetization solution, you can achieve all of the above. Such a solution can easily be purchased as part of an API analytics package. Or, if you have developer hours to spare, many companies opt to build their own solution. Ideally, whatever solution you choose will allow you to track API usage and sync it to a billing provider like Stripe, Recurly, or Chargebee. Wrapping Up In this article, we covered five of the best Node.js frameworks for developing RESTful APIs with the JavaScript programming language. We looked at a high-level overview of each and listed some points for consideration. We also discussed some key factors in deciding which API framework to use. Whichever framework you choose, we encourage you to examine the unique needs of your app before choosing.
Justin Albano, Software Engineer, IBM
Thomas Hansen, CEO, Aista, Ltd
Hiren Dhaduk, CTO, Simform
Tetiana Stoyko, CTO and Co-Founder, Incora Software Development