Integration refers to the process of combining software parts (or subsystems) into one system. An integration framework is a lightweight utility that provides libraries and standardized methods to coordinate messaging among different technologies. As software connects the world in increasingly complex ways, integration makes it all possible by facilitating app-to-app communication. Learn more about this necessity for modern software development by keeping a pulse on industry topics such as integrated development environments, API best practices, service-oriented architecture, enterprise service buses, communication architectures, integration testing, and more.
This is an article from DZone's 2023 Software Integration Trend Report. For more: Read the Report
Multi-cloud integration strategies may sound like buzzwords and marketing slang, but in this article, I will demystify them. I will also dive deeply into on-premises and legacy systems and how we can integrate with them. Before we jump into the topic, I would like to define what integration means in a cloud context. Cloud integration is a process that allows organizations' applications, infrastructure, data, and components to properly work together within one or several cloud providers. It also includes connecting on-premises data centers to the cloud if migration can be done across the organization. Cloud Integrations An important part of cloud integration is understanding the strategies. Many medium- and enterprise-level companies choose multi-cloud and hybrid-cloud approaches. Why is successful integration important for companies? Most companies building solutions have to exchange data with on-premises or out-of-support solutions. Properly designed integration solutions will save a lot of time and money. We can see it in the example of a bank multi-cloud application at the end of the article. Hybrid vs. Multi-Cloud Below is a comparison of both strategies' pros and cons. Before we jump in, keep the differences between public and private clouds in mind. Remember that public clouds provide computing power, SaaS, and PaaS services for organizations that don't have (or where it is difficult to have) their own data centers. A private cloud (on-premises) is an infrastructure the company maintains internally.
HYBRID VS. MULTI-CLOUD PROS AND CONS
Hybrid Cloud
Description: Hybrid clouds combine private clouds/on-prem data centers with a public cloud, an approach that companies usually take. For example, banks have secure on-prem environments that they won't move to the cloud. Meanwhile, they have other, less secure solutions that can be easily moved to a public cloud and have fewer connections to on-premises.
Pros: Flexibility to connect infrastructure that can't be moved to the public cloud. Increased security thanks to the on-prem component. Flexibility between using a legacy system and modern public cloud services.
Cons: It can be difficult to maintain legacy, on-prem environments. Additional cost for companies because they need to maintain their own hardware.
Multi-Cloud
Description: Multi-cloud combines several public clouds without using a private cloud. Usually, companies choose a multi-cloud strategy to avoid vendor lock-in.
Pros: Flexible and scalable environments. You can choose the services in each cloud that work best for your company. Freedom to implement the solution across several clouds.
Cons: The cost of maintaining different services on several cloud providers can be prohibitive. Complexity in managing and separating different services. Securing network communication between clouds can be difficult.
Cloud Integration Patterns and Best Practices Applying a good integration strategy also requires knowing some integration best practices and patterns. Cloud Integration Patterns Understanding the main set of integration patterns is key to using existing integration solutions or designing a new one from scratch. Knowledge of these patterns is also a massive benefit when integrating cloud applications with enterprise, on-premises infrastructure. Asynchronous Messaging Asynchronous messaging allows components and services to process data without waiting for each other.
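To make the pattern concrete, here is a minimal, illustrative sketch (not tied to any specific cloud service) that mimics asynchronous messaging with an in-memory queue; in a real integration, the queue would be a managed broker such as SQS, Pub/Sub, or Service Bus.
Java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncMessagingSketch {
    public static void main(String[] args) {
        // Stand-in for a managed message broker (SQS, Pub/Sub, Service Bus, ...).
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Producer: publishes an event and moves on without waiting for the consumer.
        Thread producer = new Thread(() -> {
            queue.offer("order-created:42");
            System.out.println("Producer done, not blocked by the consumer");
        });

        // Consumer: processes messages whenever they arrive.
        Thread consumer = new Thread(() -> {
            try {
                String message = queue.take();
                System.out.println("Consumed " + message);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        consumer.start();
        producer.start();
    }
}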
Asynchronous messaging also allows components to be decoupled from each other. Figure 1 Shared Databases This pattern uses a shared database to communicate and exchange data between enterprise applications and services. As part of a shared database and communication bus, we can also use an enterprise service bus that can save and exchange data between several components. Figure 2 Remote Procedure Call Remote procedure call (RPC) is an abstraction layer or protocol that allows one network component to communicate with another without knowing the whole network's complete functionality. Figure 3 File Transfer The file transfer pattern provides an interface to share files between cloud or application components. For example, file transfer is useful if an application produces CSV or XML reports; the integration service should adapt this file for other applications. Figure 4 Recommended Practices for Cloud Integration Here are three of the most important best practices for cloud integration: Use native SaaS tools that cloud providers offer. This approach always provides the best integration options between applications and components. There are even "no-code" tools for non-technical people. We will get into native Azure, AWS, and Google Cloud services in the next section. Use an Integration Platform as a Service (iPaaS). Some services and components provide integration capabilities and are hosted as cloud services. For example, TriggerMesh and cenit.io are open-source integration platforms that allow building event-driven applications in Kubernetes, orchestrating data flow, and providing API management capabilities in cloud providers and on-premises. Use a Function Platform as a Service (FPaaS). These platforms provide a high degree of customization for integration options, from which some organizations can benefit. This approach is intended for cloud solution architects and requires knowledge of cloud architecture patterns and function-oriented software development skills. FPaaS tools include AWS Lambda, Azure Functions, Google Cloud Functions, and Apache OpenWhisk. Common Integration Services Knowing the general cloud integration best practices and patterns is crucial. However, knowing what exactly each cloud provider offers is also important. In this section, we will briefly touch upon common cloud integration services from providers such as AWS, Azure, and Google Cloud. Keep in mind: This section covers some of the most ubiquitous integration services available, but it is not an exhaustive list. To learn more about the list below, common benefits, and drawbacks associated with each, check out this platform breakdown for more information. AWS AWS has several integration services that provide powerful features alongside simplicity. This list includes SNS (Simple Notification Service), SQS (Simple Queue Service), SWF (Simple Workflow Service), and AWS Step Functions. To learn more, visit the AWS Application Integration services page. Google Cloud Google Cloud has a vast integration ecosystem, also commonly referred to as Integration Platform as a Service (iPaaS). This provides a set of tools and services to manage and connect applications. The Google Cloud iPaaS contains the following core services: integration designer, triggers, and tasks. Learn more about each Google Cloud integration service here. Azure Azure offers an Azure integration service set (also commonly referred to as Azure Integration Platform as a Service).
This contains a variety of services set up to provide strong integration between applications. Some of the most powerful integration services Azure offers include API Management, Logic Apps, Service Bus, Event Grid, and Azure Arc. If you are interested in reading more on the various Azure integration services, check out this page to learn more. A Bank Multi-Cloud Application As mentioned, banking applications require a massive security layer. Also, many banks maintain their own highly secure data centers, and migrating all secured data to the cloud may not even be an option. Figure 5: A banking multi-cloud integration application example In this example, we selected Azure as the cloud for the main application. The application is based on a microservices architecture and is deployed to several Kubernetes clusters. Azure stores secrets, configuration in Cosmos DB, and some files in Blob Storage. Azure also provides an observability platform with a service mesh. All secured data is stored in the on-premises data center, and the AWS Cloud part contains a workflow for the call center. Conclusion In this article, we've reviewed top cloud integration patterns and services that start the integration process from scratch or that consider an existing environment. Designing integrations of software solutions in the cloud requires knowledge of best practices and patterns. Furthermore, it requires a deep understanding of the toolsets, services, and components each cloud and framework offers. For example, alongside Azure Arc, AWS offers services like Systems Manager. Before starting an integration project, I follow this checklist: Keep in mind the KISS principle. Have a look at existing integration patterns. Check what integration components and services other clouds provide. In short, multi-cloud integration means making the solutions and components of one cloud provider work with those of others using existing cloud integration components and patterns.
ChatGPT is a cutting-edge natural language processing model created by OpenAI, an artificial intelligence research lab co-founded by Elon Musk, Sam Altman, Greg Brockman, and others. The underlying GPT family of models was introduced in June 2018 and has continuously been modified and upgraded, with GPT-3 released in 2020. ChatGPT was built on top of the GPT-3.5 series of models and was released in November 2022. The “GPT” in ChatGPT stands for “Generative Pre-trained Transformer,” a reference to the model’s deep learning architecture. ChatGPT essentially uses massive amounts of text data to “learn” the intricacies of human language, allowing it to create replies that are frequently impossible to differentiate from those of humans. The ChatGPT technology is based on a type of neural network known as a transformer, which is well-suited to natural language processing applications. ChatGPT’s capacity to create high-quality output to a wide range of prompts you give in the chat box is one of its key features. The response you get feels like it has been generated by a human. You can also specify the conditions of your request. The model was trained on a vast corpus of text, so it can rely on a wide range of data to answer questions, provide information, or participate in casual conversation. It is crucial to remember, however, that ChatGPT is not flawless. When faced with prompts that are beyond the model’s area of knowledge, it may provide illogical or unsuitable replies. ChatGPT usage is sometimes high, and because of this, there can be a limit to how many queries you can send. Now, since the main purpose of this article is to give you a comprehensive and hands-on example of how ChatGPT can be used for generating code for a REST API, let’s first understand what a REST API is. A REST API is a type of web service that allows two computer systems to communicate with one another over the internet by following a set of standardized rules. It follows the REST architecture, which is known for its stateless and client-server nature. A REST API works by the client sending a request to the server using a specific URL and HTTP method, such as GET, POST, PUT, or DELETE, and the server responding with a message containing the requested data or indicating whether the requested operation was successful. REST APIs are widely used in modern web and mobile applications because they provide a standardized and efficient method of exchanging data between systems. You can usually learn more about a specific API by reading through its documentation. Some APIs are commercial and usually have different pricing plans, which also means that making a request often requires authentication, such as a bearer token. Now comes the interesting part: creating a fully functional REST API in C# for managing students with the help of ChatGPT, so keep on reading. Creating the API In this section, we’ll let ChatGPT code a fully functional REST API for managing students, including creation, deletion, and modification of student records. To do this, we should have a “students controller,” which would have the API endpoints, and a “student class,” which would have the needed fields such as “Id,” “name,” and “age.” The only job we have here is to make logical, well-structured queries that become gradually more complex as we build the API part by part. Let’s begin. Write a REST API for Managing Students in C# Here’s an example of a RESTful API for managing students in C#.
This API will use the HTTP methods GET, POST, PUT, and DELETE to perform CRUD (Create, Read, Update, Delete) operations on a list of student resources. First, let’s define a “student class” to represent a student resource: C# public class Student { public int Id { get; set; } public string Name { get; set; } public int Age { get; set; } public string Email { get; set; } } Next, let’s create a StudentsController class to handle the HTTP requests: C# [Route("api/students")] public class StudentsController : Controller { private static List<Student> students = new List<Student> { new Student { Id = 1, Name = "John Smith", Age = 20, Email = "john.smith@example.com" }, new Student { Id = 2, Name = "Jane Doe", Age = 19, Email = "jane.doe@example.com" }, new Student { Id = 3, Name = "Bob Johnson", Age = 21, Email = "bob.johnson@example.com" } }; [HttpGet] public IActionResult GetAllStudents() { return Ok(students); } [HttpGet("{id}")] public IActionResult GetStudent(int id) { var student = students.FirstOrDefault(s => s.Id == id); if (student == null) { return NotFound(); } return Ok(student); } [HttpPost] public IActionResult CreateStudent([FromBody] Student student) { student.Id = students.Max(s => s.Id) + 1; students.Add(student); return CreatedAtAction(nameof(GetStudent), new { id = student.Id }, student); } [HttpPut("{id}")] public IActionResult UpdateStudent(int id, [FromBody] Student student) { var existingStudent = students.FirstOrDefault(s => s.Id == id); if (existingStudent == null) { return NotFound(); } existingStudent.Name = student.Name; existingStudent.Age = student.Age; existingStudent.Email = student.Email; return NoContent(); } [HttpDelete("{id}")] public IActionResult DeleteStudent(int id) { var student = students.FirstOrDefault(s => s.Id == id); if (student == null) { return NotFound(); } students.Remove(student); return NoContent(); } This StudentsController class defines the following endpoints: GET /api/students: Returns a list of all students. GET /api/students/{id}: Returns a single student resource by ID. POST /api/students: Creates a new student resource. PUT /api/students/{id}: Updates an existing student resource by ID. DELETE /api/students/{id}: Deletes an existing student resource by ID. Note: we are using the [FromBody] attribute to deserialize the request body into a “student object” for the CreateStudent and UpdateStudent methods. Also note that the students list is static and initialized with some sample data. In a real-world scenario, this data would likely be stored in a database or some other persistent storage. Finally, we’re using the HTTP status codes 200 OK, 201 Created, 204 No Content, and 404 Not Found to indicate the outcome of each operation. Add More Properties to the Student Class Here’s an updated version of the “student class” with some additional properties: C# public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public string ZipCode { get; set; } } In this updated version of the class, we’ve added the following properties: FirstName: The first name of the student. LastName: The last name of the student. Address: The street address of the student. City: The city where the student lives. State: The state where the student lives. ZipCode: The ZIP code of the student’s address. 
These additional properties can provide more information about the student and can be useful for filtering, sorting, and searching operations in the API. Add Another Property to the Student Class Regarding the Subjects the Student Has Taken Here’s an updated version of the “student class” with an additional property for the subjects the student has taken: C# public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public string ZipCode { get; set; } public List<string> SubjectsTaken { get; set; } } In this updated version of the class, we’ve added a property called SubjectsTaken, which is a list of strings representing the names of the subjects the student has taken. This property can be useful for querying and filtering students by the subjects they have studied. You can add subjects to the list like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = "CA", ZipCode = "12345", SubjectsTaken = new List<string> { "Math", "Science", "History" } }; And, you can retrieve the list of subjects for a given student like this: C# var student = GetStudent(1); var subjects = student.SubjectsTaken; Change SubjectTaken to an Object Named Subject Containing the Properties Id, Name, Teacher, and Credits Here’s an updated version of the “student class” where the SubjectsTaken property is replaced with a new object subject containing the Id, Name, Teacher, and Credits properties: C# public class Subject { public int Id { get; set; } public string Name { get; set; } public string Teacher { get; set; } public int Credits { get; set; } } public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public string ZipCode { get; set; } public List<Subject> SubjectsTaken { get; set; } } In this updated version of the class, we’ve replaced the SubjectsTaken property with a new object subject containing the Id, Name, Teacher, and Credits properties. This new structure can provide more detailed information about the subjects the student has taken, and it can be useful for filtering and querying students based on specific subject properties. You can add a subject to the SubjectsTaken list like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = "CA", ZipCode = "12345", SubjectsTaken = new List<Subject> { new Subject { Id = 1, Name = "Math", Teacher = "Mr. Smith", Credits = 4 }, new Subject { Id = 2, Name = "Science", Teacher = "Ms. 
Jones", Credits = 3 } } }; And you can retrieve the list of subjects for a given student like this: C# var student = GetStudent(1); var subjects = student.SubjectsTaken; Change the State Property in the Student Class to Enum C# public enum State { AL, AK, AZ, AR, CA, CO, CT, DE, FL, GA, HI, ID, IL, IN, IA, KS, KY, LA, ME, MD, MA, MI, MN, MS, MO, MT, NE, NV, NH, NJ, NM, NY, NC, ND, OH, OK, OR, PA, RI, SC, SD, TN, TX, UT, VT, VA, WA, WV, WI, WY } public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public State State { get; set; } public string ZipCode { get; set; } public List<Subject> SubjectsTaken { get; set; } } In this updated version of the class, we’ve replaced the State property with an enum called State. The State enum contains a list of state abbreviations and provides type safety for the State property. You can set the State property for a student like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = State.CA, ZipCode = "12345", SubjectsTaken = new List<Subject> { new Subject { Id = 1, Name = "Math", Teacher = "Mr. Smith", Credits = 4 }, new Subject { Id = 2, Name = "Science", Teacher = "Ms. Jones", Credits = 3 } } }; And you can retrieve the State property for a given student like this: C# var student = GetStudent(1); var state = student.State; Add Validation for Making the ID, FirstName, and Email Mandatory Here’s an updated version of the student class that includes validation for the Id, FirstName, and Email properties: C# public class Student { private int _id; private string _firstName; private string _email; public int Id { get { return _id; } set { if (value <= 0) { throw new ArgumentException("Id must be greater than zero."); } _id = value; } } public string FirstName { get { return _firstName; } set { if (string.IsNullOrEmpty(value)) { throw new ArgumentException("First name cannot be null or empty."); } _firstName = value; } } public string LastName { get; set; } public int Age { get; set; } public string Email { get { return _email; } set { if (string.IsNullOrEmpty(value)) { throw new ArgumentException("Email cannot be null or empty."); } _email = value; } } public string Address { get; set; } public string City { get; set; } public State State { get; set; } public string ZipCode { get; set; } public List<Subject> SubjectsTaken { get; set; } } In this updated version of the class, we’ve added private fields for the Id, FirstName, and Email properties and added validation logic to their corresponding setter methods. The Id property must be greater than zero, and the FirstName and Email properties cannot be null or empty. If any of these validation checks fail, an ArgumentException will be thrown. You can create a new student object like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = State.CA, ZipCode = "12345", SubjectsTaken = new List<Subject> { new Subject { Id = 1, Name = "Math", Teacher = "Mr. Smith", Credits = 4 }, new Subject { Id = 2, Name = "Science", Teacher = "Ms. 
Jones", Credits = 3 } } }; If you attempt to create a new student object with invalid data, an ArgumentException will be thrown: C# var invalidStudent = new Student { Id = 0, // throws ArgumentException FirstName = "", // throws ArgumentException LastName = "Doe", Age = 20, Email = "", // throws ArgumentException Address = "123 Main St", City = "Anytown", State = State.CA, ZipCode = "12345", SubjectsTaken = new List<Subject>() }; Make the Students Controller Asynchronous Here’s an updated version of the StudentsController class that makes use of asynchronous methods: C# [ApiController] [Route("api/[controller]")] public class StudentsController : ControllerBase { private readonly IStudentRepository _studentRepository; public StudentsController(IStudentRepository studentRepository) { _studentRepository = studentRepository; } [HttpGet] public async Task<ActionResult<IEnumerable<Student>>> GetAllStudents() { var students = await _studentRepository.GetAllStudentsAsync(); return Ok(students); } [HttpGet("{id}")] public async Task<ActionResult<Student>> GetStudentById(int id) { var student = await _studentRepository.GetStudentByIdAsync(id); if (student == null) { return NotFound(); } return Ok(student); } [HttpPost] public async Task<ActionResult<Student>> CreateStudent(Student student) { if (!ModelState.IsValid) { return BadRequest(ModelState); } await _studentRepository.AddStudentAsync(student); return CreatedAtAction(nameof(GetStudentById), new { id = student.Id }, student); } [HttpPut("{id}")] public async Task<IActionResult> UpdateStudent(int id, Student student) { if (id != student.Id) { return BadRequest(); } if (!ModelState.IsValid) { return BadRequest(ModelState); } var existingStudent = await _studentRepository.GetStudentByIdAsync(id); if (existingStudent == null) { return NotFound(); } await _studentRepository.UpdateStudentAsync(student); return NoContent(); } [HttpDelete("{id}")] public async Task<IActionResult> DeleteStudent(int id) { var existingStudent = await _studentRepository.GetStudentByIdAsync(id); if (existingStudent == null) { return NotFound(); } await _studentRepository.DeleteStudentAsync(existingStudent); return NoContent(); } } In this updated version of the class, we’ve added the async keyword to the action methods that interact with the IStudentRepository. We’ve also added the await keyword before calling the asynchronous methods to wait for the result to complete before continuing. If the action method returns a value, we’ve wrapped it in the ActionResult<T> type, where “T” is the type of the returned value. If an error occurs, we return an appropriate ActionResult, such as BadRequest() or NotFound(). Note: the IStudentRepository interface must also be updated to include asynchronous versions of its methods. Here’s an example of what the updated IStudentRepository might look like: C# public interface IStudentRepository { Task<IEnumerable<Student>> GetAllStudentsAsync(); Task<Student> GetStudentByIdAsync(int id); Task AddStudentAsync(Student student); Task UpdateStudentAsync(Student student); Task DeleteStudentAsync(Student student); } In this version of the interface, we’ve added the async keyword to each method and changed the return types to Task<T>, where “T” is the type of the returned value. This allows the calling code to use the await keyword to wait for the method to complete. Conclusion In this article, we used ChatGPT to build a REST API for managing students using ASP.NET Core and C# programming language. 
This API is a basic implementation of a web service that implements CRUD (Create, Read, Update, Delete) operations for managing a list of students by providing several distinct endpoints. ChatGPT was able to add basic validation to the student class, as well as make the code asynchronous. Apart from that, this AI tool easily changed the type of the state property to enum and created a whole new class when we needed a new object for the subject property. All in all, it did a great job and created fully functional classes, a controller, and a repository. ChatGPT, in my opinion, should be taken seriously as a programming aid, especially for simple and repetitive tasks, since it can automate them and recommend more efficient and effective programming techniques. As a result, it can enable programmers to save time, and they can use that time to solve more challenging and creative programming tasks. However, I also believe it’s important to use it as a complement to human programming skills rather than a replacement since it is oftentimes incapable of providing optimal solutions in complex situations. As for its power, I am confident this article was enough to prove the power of ChatGPT even to the biggest skeptics out there. If you are a developer, you can use ChatGPT for other various use cases, such as to validate or test code. You can even use it for code generation in Python. Many companies are already looking for ways to integrate ChatGPT to their platform. In summary, ChatGPT is an impressive AI language model that has the potential to revolutionize the way we approach programming, and while it may not be able to completely replace human programmers, it can certainly be considered a valuable resource.
Most inter-system communication components that use REST serialize their payload in JSON. As of now, JSON lacks a widely used schema validation standard: JSON Schema is not widespread. Standard schema validation allows delegating the validation to a third-party library and being done with it. Without one, we must fall back to manual validation in the code. Worse, we must keep the validation code in sync with the schema. XML has schema validation out-of-the-box: an XML document can declare a grammar that it must conform to. SOAP, being based on XML, benefits from it, too. Other serialization alternatives have a schema validation option: e.g., Avro, Kryo, and Protocol Buffers. Interestingly enough, gRPC uses Protobuf to offer RPC across distributed components: gRPC is a modern open source high performance Remote Procedure Call (RPC) framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication. It is also applicable in last mile of distributed computing to connect devices, mobile applications and browsers to backend services. - Why gRPC? Moreover, Protobuf is a binary serialization mechanism, saving a lot of bandwidth. Thus, gRPC is an excellent option for inter-system communication. But if all your components talk gRPC, how can simple clients call them? In this post, we will build a gRPC service and show how to call it from cURL. A Simple gRPC Service The gRPC documentation is exhaustive, so here's a summary: gRPC is a Remote Procedure Call framework. It works across a wide range of languages. It relies on Protocol Buffers: Protocol buffers are Google’s language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages. - Protocol Buffers It's part of the CNCF portfolio and is currently in the incubation stage. Let's set up our gRPC service. We will use Java, Kotlin, Spring Boot, and a dedicated gRPC Spring Boot integration project. The project structure holds two projects: one for the model and one for the code. Let's start with the model project. I didn't want something complicated. Reusing a simple example is enough: the request sends a string, and the response prefixes it with Hello. We design this model in a dedicated Protobuf schema file:
ProtoBuf
syntax = "proto3";                                  //1
package ch.frankel.blog.grpc.model;                 //2
option java_multiple_files = true;                  //3
option java_package = "ch.frankel.blog.grpc.model"; //3
option java_outer_classname = "HelloProtos";        //3

service HelloService {                              //4
    rpc SayHello (HelloRequest) returns (HelloResponse) { }
}

message HelloRequest {                              //5
    string name = 1;                                //6
}

message HelloResponse {                             //7
    string message = 1;                             //6
}
Protobuf definition version Package Java-specific configuration Service definition Request definition Field definition: First comes the type, then the name, and finally, the order.
Response definition We shall use Maven to generate the Java boilerplate code: XML <project> <dependencies> <dependency> <groupId>io.grpc</groupId> <!--1--> <artifactId>grpc-stub</artifactId> <version>${grpc.version}</version> </dependency> <dependency> <groupId>io.grpc</groupId> <!--1--> <artifactId>grpc-protobuf</artifactId> <version>${grpc.version}</version> </dependency> <dependency> <groupId>jakarta.annotation</groupId> <!--1--> <artifactId>jakarta.annotation-api</artifactId> <version>1.3.5</version> <optional>true</optional> </dependency> </dependencies> <build> <extensions> <extension> <groupId>kr.motd.maven</groupId> <!--2--> <artifactId>os-maven-plugin</artifactId> <version>1.7.1</version> </extension> </extensions> <plugins> <plugin> <groupId>org.xolstice.maven.plugins</groupId> <!--3--> <artifactId>protobuf-maven-plugin</artifactId> <version>${protobuf-plugin.version}</version> <configuration> <protocArtifact>com.google.protobuf:protoc:${protobuf.version}:exe:${os.detected.classifier}</protocArtifact> <pluginId>grpc-java</pluginId> <pluginArtifact>io.grpc:protoc-gen-grpc-java:${grpc.version}:exe:${os.detected.classifier}</pluginArtifact> </configuration> <executions> <execution> <goals> <goal>compile</goal> <goal>compile-custom</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> Compile-time dependencies Sniff information about the Operating System; used in the next plugin Generate Java code from the proto file. After compilation, the structure should look something like the following: We can package the classes in a JAR and use it in a web app project. The latter is in Kotlin, but only because it's my favorite JVM language. We only need a specific Spring Boot starter dependency to integrate gRPC endpoints with Spring Boot: XML <dependency> <groupId>net.devh</groupId> <artifactId>grpc-server-spring-boot-starter</artifactId> <version>2.14.0.RELEASE</version> </dependency> Here's the significant bit: Kotlin @GrpcService //1 class HelloService : HelloServiceImplBase() { //2 override fun sayHello( request: HelloRequest, //2 observer: StreamObserver<HelloResponse> //3 ) { with(observer) { val reply = HelloResponse.newBuilder() //2 .setMessage("Hello ${request.name}") //4 .build() onNext(reply) //5 onCompleted() //5 } } } The grpc-server-spring-boot-starter detects the annotation and works its magic. Reference classes generated in the above project The method signature allows a StreamObserver parameter. The class comes from grpc-stub.jar. Get the request and prefix it to build the response message. Play the events. We can now start the web app with ./mvnw spring-boot:run. Testing the gRPC Service The whole idea behind the post is that accessing the gRPC service with regular tools is impossible. To test, we need a dedicated tool nonetheless. I found grpcurl. Let's install it and use it to list available services: Shell grpcurl --plaintext localhost:9090 list #1-2 List all available gRPC services without TLS verification. To avoid clashes between gRPC and other channels, e.g., REST, Spring Boot uses another port. 
Plain Text ch.frankel.blog.grpc.model.HelloService #1 grpc.health.v1.Health #2 grpc.reflection.v1alpha.ServerReflection #2 The gRPC service we defined Two additional services provided by the custom starter We can also dive into the structure of the service: Shell grpcurl --plaintext localhost:9090 describe ch.frankel.blog.grpc.model.HelloService Java service HelloService { rpc SayHello ( .ch.frankel.blog.grpc.model.HelloRequest ) returns ( .ch.frankel.blog.grpc.model.HelloResponse ); } Finally, we can call the service with data: Shell grpcurl --plaintext -d '{"name": "John"}' localhost:9090 ch.frankel.blog.grpc.model.HelloService/SayHello JSON { "message": "Hello John" } Accessing the gRPC Service With Regular Tools Imagine that we have a regular JavaScript client-side application that needs to access the gRPC service. What would be the alternatives? The general approach is through grpc-web: A JavaScript implementation of gRPC for browser clients. For more information, including a quick start, see the gRPC-web documentation. gRPC-web clients connect to gRPC services via a special proxy; by default, gRPC-web uses Envoy. In the future, we expect gRPC-web to be supported in language-specific web frameworks for languages such as Python, Java, and Node. For details, see the roadmap. - grpc-web The description states a single limitation: it works only for JavaScript (as of now). However, there's another one. It's pretty intrusive. You need to get the proto file, generate boilerplate code, and make your code call it. You must do it for every client type. Worse, if the proto file changes, you need to regenerate the client code in each of them. An alternative exists, though, if you're using an API Gateway. I'll describe how to do it with Apache APISIX, but perhaps other gateways can do the same. grpc-transcode is a plugin that allows transcoding REST calls to gRPC and back again. The first step is to register the proto file in Apache APISIX: Shell curl http://localhost:9180/apisix/admin/protos/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d "{ \"content\": \"$(sed 's/"/\\"/g' ../model/src/main/proto/model.proto)\" }" The second step is to create a route with the above plugin: Shell curl http://localhost:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "uri": "/helloservice/sayhello", #1 "plugins": { "grpc-transcode": { "proto_id": "1", #2 "service": "ch.frankel.blog.grpc.model.HelloService", #3 "method": "SayHello" #4 } }, "upstream": { "scheme": "grpc", "nodes": { "server:9090": 1 } } }' Define a granular route. Reference the proto file defined in the previous command. gRPC service gRPC method At this point, any client can make an HTTP request to the defined endpoint. Apache APISIX will transcode the call to gRPC, forward it to the defined service, get the response, and transcode it again. Shell curl localhost:9080/helloservice/sayhello?name=John JSON {"message":"Hello John"} Compared to grpc-web, the API Gateway approach allows sharing the proto file with a single component: the Gateway itself. Benefits of Transcoding At this point, we can leverage the capabilities of the API Gateway. Imagine we want a default value if no name is passed, e.g., World. Developers would happily set it in the code, but any change to the value would require a complete build and deployment. Changes can be nearly-instant if we put the default value in the Gateway's routes processing chain. 
Let's change our route accordingly: Shell curl http://localhost:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "uri": "/helloservice/sayhello", "plugins": { "grpc-transcode": { ... }, "serverless-pre-function": { #1 "phase": "rewrite", #2 "functions" : [ "return function(conf, ctx) #3 local core = require(\"apisix.core\") if not ngx.var.arg_name then local uri_args = core.request.get_uri_args(ctx) uri_args.name = \"World\" ngx.req.set_uri_args(uri_args) end end" ] } }, "upstream": { ... } }' Generic all-purpose plugin when none fits Rewrite the request. Magic Lua code that does the trick Now, we can execute the request with an empty argument and get the expected result: Shell curl localhost:9080/helloservice/sayhello?name JSON {"message":"Hello World"} Conclusion In this post, we have briefly described gRPC and how it benefits inter-service communication. We developed a simple gRPC service using Spring Boot and grpc-server-spring-boot-starter. It comes at a cost, though: regular clients cannot access the service. We had to resort to grpcurl to test it. The same goes for clients based on JavaScript - or the browser. To bypass this limitation, we can leverage an API Gateway. I demoed how to configure Apache APISIX with the grpc-transcode plugin to achieve the desired result. The complete source code for this post can be found on GitHub. To Go Further os-maven-plugin Maven Protocol Buffers Plugin gRPC-Spring-Boot-Starter
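As a final aside, and as a complement to the grpcurl and gateway-based calls above, here is a minimal sketch of a native JVM client using the stubs generated from the proto file earlier; the address and port are assumed to match the Spring Boot setup described in this post.
Java
import ch.frankel.blog.grpc.model.HelloRequest;
import ch.frankel.blog.grpc.model.HelloResponse;
import ch.frankel.blog.grpc.model.HelloServiceGrpc;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class HelloClient {
    public static void main(String[] args) {
        // Plaintext channel, mirroring the --plaintext flag used with grpcurl.
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 9090)
                .usePlaintext()
                .build();
        try {
            HelloServiceGrpc.HelloServiceBlockingStub stub = HelloServiceGrpc.newBlockingStub(channel);
            HelloResponse response = stub.sayHello(HelloRequest.newBuilder().setName("John").build());
            System.out.println(response.getMessage()); // Hello John
        } finally {
            channel.shutdownNow();
        }
    }
}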
An API (Application Programming Interface) acts as an intermediary between two distinct software applications, enabling seamless communication and data exchange. By providing a standardized interface, APIs offer developers the ability to access specific functionalities or data from another software application or service without the need to understand or modify the underlying code. This results in more efficient development processes, improved interoperability between applications, and enhanced overall functionality. APIs represent a potent resource for developers as they offer a uniform approach to accessing data and functionality from various software applications and services, resulting in the creation of more efficient and effective software solutions. This not only streamlines the development process but also enhances the overall performance and scalability of the resulting applications. As the usage of APIs continues to rise, it is imperative that they are appropriately secured. Many industry-wide threats are the result of excessive or sensitive data being leaked through APIs. To mitigate this risk, it is essential to implement a shift-left approach, which involves securing APIs from the development stage through maintenance. By integrating security measures at the earliest stage of the development process, developers can reduce the risk of potential vulnerabilities being introduced into the API. Additionally, regular maintenance and updates can ensure that any new vulnerabilities are identified and promptly addressed. Overall, implementing a shift-left approach is critical to ensuring the security and protection of data being transmitted through APIs. Types of APIs: An Overview APIs are categorized into multiple types, depending on the functionality and access. Some of the most commonly used are: REST APIs Representational State Transfer (REST) APIs are the most commonly used type of API, enabling communication between diverse software systems using standard HTTP requests and responses. SOAP APIs Simple Object Access Protocol (SOAP) APIs are a form of web service that utilizes XML-based messaging protocols for transmitting data between applications. These APIs rely on the Remote Procedure Call (RPC) style, which enables applications to request services from each other on a network. GraphQL APIs GraphQL APIs provide developers with a distinct approach to REST APIs by enabling them to define the precise information they require instead of retrieving a predetermined dataset. These APIs utilize a query language and, typically, a single endpoint to retrieve information from a server. OpenAPI/Swagger APIs OpenAPI/Swagger is used to define and document RESTful APIs, allowing developers to understand the structure and functionality of an API before using it. What Is API Security? API security encompasses a range of measures and best practices aimed at safeguarding APIs and the data transmitted through them against unauthorized access, exploitation, and misuse. Due to their vulnerability to attacks, such as injection attacks and unauthorized access, APIs are particularly susceptible to data breaches and sensitive data exposure. Effective API security protocols aim to prevent such incidents and ensure that APIs remain secure throughout their lifecycle, from development to maintenance.
By implementing and adhering to best practices in API security, organizations can ensure that the APIs they use and develop remain secure and that sensitive data transmitted through them is protected from unauthorized access and exploitation. This, in turn, helps to safeguard the overall security and integrity of the organization's digital systems and protects against potentially devastating data breaches. Standards Used: There are multiple standards and frameworks used to secure APIs. Some of the widely used are: Open Web Application Security Project (OWASP) API Security Top 10, National Institute of standards and technology (NIST) API Security, Open-standard Authorization protocol (OAUTH), and Open ID connect. API Security Components Note: The following components have been captured with a focus on security, and no other types of testing or processes have been included. API Security Common Challenges Following are some of the common challenges faced when it comes to API Security: Ensuring only authorized users and applications can access the API is a major challenge in API security, requiring proper authentication and authorization mechanisms. API security must address the risk of sensitive data exposure, including the use of encryption to protect against loss or leakage. Third-Party Library Vulnerabilities: The use of third-party libraries and components in APIs introduces vulnerabilities that attackers may exploit. Organizations must update and patch these components regularly to prevent exploitation. API security must defend against Denial of Service (DoS) attacks, which involve overwhelming the API with a flood of requests to disrupt its normal operation. DoS protection mechanisms are essential to prevent such attacks. Injection attacks seriously threaten API security, as attackers can insert malicious code into API requests to exploit vulnerabilities. As a result, organizations must implement input validation and sanitization techniques to prevent these attacks. Why Do We Need API Security? APIs are the foundation of modern digital ecosystems, enabling seamless communication and data exchange between various applications, services, and devices. However, APIs are also a common target for threat actors due to the sensitive information they carry. Here are some of the reasons why API security is essential. Protect Sensitive data Mitigate Cybersecurity Risks Ensure Compliance Maintain Business Continuity Maintain Trust In recent times we have observed several recent API security-related attacks, some of them are: SolarWinds Supply Chain Attack: In December 2020, it was uncovered that the SolarWinds Orion API had been breached, resulting in the insertion of malevolent code into updates that were disseminated to customers. This enabled cybercriminals to infiltrate the sensitive data and systems of numerous government agencies and businesses. Facebook API Bug: Hackers exploited a Facebook API bug in April 2021, which led to the unauthorized access of personal data belonging to more than 500 million users. This vulnerability enabled attackers to scrape users' birth dates, email addresses, phone numbers, and other confidential information. T-Mobile Data Breach: T-Mobile announced a data breach in August 2021, affecting more than 50 million customers. The cybercriminals exploited an API vulnerability to gain entry to customers' personal data, including social security numbers, birth dates, names, and addresses. 
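Breaches like these usually trace back to the common challenges listed earlier, such as missing authorization checks or unvalidated input. As a small, framework-agnostic illustration of the input validation and sanitization point (a rough sketch with made-up names, not a complete solution), the snippet below validates a client-supplied identifier and uses a parameterized SQL statement instead of string concatenation.
Java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {

    // Basic input validation: only accept short, purely numeric identifiers.
    static int parseUserId(String raw) {
        if (raw == null || !raw.matches("\\d{1,9}")) {
            throw new IllegalArgumentException("Invalid user id");
        }
        return Integer.parseInt(raw);
    }

    // Parameterized query: the id is bound as a parameter, never concatenated into the SQL string.
    static String findUserName(Connection connection, String rawId) throws SQLException {
        int id = parseUserId(rawId);
        try (PreparedStatement stmt =
                     connection.prepareStatement("SELECT name FROM users WHERE id = ?")) {
            stmt.setInt(1, id);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}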
Considering the constantly evolving threat landscape, securing APIs is of utmost importance to ensure the reliability, privacy, and security of modern digital environments. Best Practices to Be Considered for API Security API security is critical to ensure that the application programming interfaces (APIs) of an organization remain protected from cyber threats. Here are some best practices to be considered for API security: Use strong authentication: Strong authentication mechanisms (e.g., OAuth 2.0, OpenID Connect, or API keys) help prevent unauthorized access to APIs. Implement proper role-based access controls. Encrypt the data: Data at rest and in transit should be encrypted. Use HTTPS: Use a secure channel for all API communications. Implement rate limiting: Prevent API abuse by limiting the number of requests (a minimal sketch appears after this article's conclusion). Follow secure coding practices: This prevents most injection attacks. Real-time monitoring: Real-time monitoring of API traffic can help identify anomalies that could be indicative of a security breach. Security testing program: Identify vulnerabilities and ensure that APIs are secure. Incident response plan: Respond quickly to any security incidents that occur. Vulnerability management: Apply security patches and updates on a regular cadence. API Security (Testing Standards) Multiple security frameworks are in use; some of the common frameworks are: Center for Internet Security (CIS) Controls A framework that provides security controls that organizations can implement to improve their overall security posture. National Institute of Standards and Technology (NIST) Cybersecurity Framework A framework that outlines standards, guidelines, and best practices to help organizations manage and mitigate cybersecurity risks. This framework includes a specific emphasis on product security and offers a versatile approach to managing cybersecurity risks. Open Web Application Security Project (OWASP) API Security A framework for developing secure APIs. I want to emphasize this framework, as I see it as a base for API security. The OWASP Top 10 serves as a guide for developers and organizations to prioritize and address security risks in their APIs. Below is the OWASP API Top 10 list: The OWASP Top 10 API Security Risks is a list of the top ten most critical security risks for APIs, as identified by the Open Web Application Security Project (OWASP). The current version is OWASP Top 10 API Security Risks 2019. Broken Object Level Authorization: This issue occurs when an API fails to authorize requests based on the level of access required to perform a particular action on a specific object. Broken User Authentication: This risk pertains to security concerns that emerge from the erroneous implementation of authentication and session management. These concerns may include weak passwords, session fixation, and session hijacking, among others. Excessive Data Exposure: This occurs when an API returns too much information in response to a request, including sensitive or confidential data. Lack of Resources and Rate Limiting: This risk encompasses problems such as inadequate rate limiting or insufficient allocation of resources. Such problems can lead to excessive resource consumption or denial-of-service attacks. Broken Function Level Authorization: This risk pertains to problems concerning the authorization and access controls of specific functions or actions within an API.
Mass Assignment: This happens when an API automatically binds client-supplied input to internal object properties without appropriate validation or filtering. This can lead to the unauthorized manipulation of data and access to sensitive information. Security Misconfiguration: This risk refers to security setting misconfigurations that can create vulnerabilities. Injection: This risk pertains to problems related to the injection of malicious code or SQL queries into an API's request or response. Improper Assets Management: This issue arises when an organization fails to keep an accurate inventory of its APIs, for example by leaving old, unpatched, or undocumented API versions and endpoints exposed. Insufficient Logging and Monitoring: This risk relates to inadequate monitoring and logging of API activity, including events that may signal a breach or an attack. Conclusion Therefore, it is crucial to secure APIs against these threats. By implementing proper API security measures, organizations can protect their critical data and systems, mitigate cybersecurity risks, ensure compliance with regulatory requirements, and maintain business continuity.
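As promised in the best-practices list above, here is a minimal in-memory sketch of fixed-window rate limiting. It is illustrative only: production systems typically enforce limits at an API gateway or with a distributed store so that all instances share the same counters.
Java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedWindowRateLimiter {
    private final int maxRequestsPerWindow;
    private final long windowMillis;
    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    public FixedWindowRateLimiter(int maxRequestsPerWindow, long windowMillis) {
        this.maxRequestsPerWindow = maxRequestsPerWindow;
        this.windowMillis = windowMillis;
    }

    // Returns true if the client may proceed; false means the API should answer with HTTP 429.
    public boolean allow(String clientId) {
        long now = System.currentTimeMillis();
        Window window = windows.compute(clientId, (id, current) ->
                current == null || now - current.start >= windowMillis ? new Window(now) : current);
        return window.count.incrementAndGet() <= maxRequestsPerWindow;
    }

    private static final class Window {
        final long start;
        final AtomicInteger count = new AtomicInteger();
        Window(long start) { this.start = start; }
    }
}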
AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS) that allows you to run code without provisioning or managing servers. Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine. Together, AWS Lambda and Node.js can be used to create a RESTful API that can be triggered by events such as an HTTP request. Prerequisites Before building a RESTful API with Express.js, you should have the following in place: As Express.js is a JavaScript framework, you’ll need Node.js installed to run it. You can download Node.js from the official website. Text Editor or Integrated Development Environment (IDE): To write and edit your API code, you will need a text editor or IDE. Examples of popular text editors are Sublime Text and Visual Studio Code, while popular IDEs are WebStorm and Visual Studio. In order to write your API, you should have a basic understanding of JavaScript, since Express.js is written in JavaScript. A familiarity with Express.js: Express.js is a web framework for Node.js that helps you build web applications and APIs quickly and easily. RESTful APIs use HTTP for communication, so you should be familiar with the protocol. It is necessary to have a basic understanding of HTTP methods (GET, POST, PUT, DELETE) and their intended uses, status codes, and the format of HTTP requests and responses. For keeping track of changes to your codebase, familiarity with version control systems (VCS) like Git is helpful. As soon as you have these prerequisites in place, you can start building your RESTful API with Express.js. Adding in the Code 1. Create an aws-lambda-express-demo folder using the mkdir command in the terminal and cd into the directory. mkdir aws-lambda-express-demo cd aws-lambda-express-demo 2. Create an app.js file in your aws-lambda-express-demo folder. touch app.js Libraries 1. We’ll use npm to download the latest version of the express package from the npm registry and store it in the node_modules folder in your project’s root directory. The package’s dependencies will be installed and stored there as well. npm install express 2. Next, we’ll install a middleware-compatible library called serverless-http, which lets you write your Express application as usual and then wrap it in a handler function that AWS Lambda can execute when it receives an HTTP request. It is also compatible with other serverless providers, such as Azure, Google Cloud, and more. npm install serverless-http You can install the package globally by running npm install -g serverless-http. Here’s an example of a RESTful API implemented using Node.js with the Express.js framework that implements the GET, POST, DELETE, and PUT methods: This code creates an Express.js app and adds routes for the GET, POST, DELETE, and PUT methods. It uses an in-memory users array to store data and uses the find and findIndex methods to retrieve and update users based on the id provided in the URL. Note that for the POST and PUT routes, you will have to parse the body of the request, which you can do with middleware such as body-parser. const express = require('express'); const app = express(); const serverless = require('serverless-http'); const users = [ { id: 1, name: 'John', company: "ABC Company" }, { id: 2, name: 'Frank', company: "XYZ Inc."
}, { id: 3, name: 'Ashley', company: "123 Company" }, ];app.get('/users', (req, res) => { res.json(users); });app.get('/users/:id', (req, res) => { const user = users.find(user => user.id === parseInt(req.params.id)); if (!user) res.status(404).json({ message: 'User not found' }); res.json(user); });app.post('/users', (req, res) => { const user = { id: users.length + 1, name: req.body.name, company: req.body.company, }; users.push(user); res.json(user); });app.delete('/users/:id', (req, res) => { const userIndex = users.findIndex(user => user.id === parseInt(req.params.id)); if (userIndex === -1) res.status(404).json({ message: 'User not found' }); users.splice(userIndex, 1); res.json({ message: 'User deleted' }); });app.put('/users/:id', (req, res) => { let user = users.find(user => user.id === parseInt(req.params.id)); if (!user) res.status(404).json({ message: 'User not found' }); user.name = req.body.name; user.company = req.body.company; res.json(user); }); const handler = serverless(app);const startServer = async () => { app.listen(3000, () => { console.log("listening on port 3000!"); }); }startServer();module.exports.handler = (event, context, callback) => { const response = handler(event, context, callback); return response; }; We’ll write the line console.log("listening on port 3000!"); to indicate that your API is up and running. Finally, module.exports.handler is a function that takes in event, context, and callback arguments, and calls the handler function passing in the event, context, and callback arguments. Running and Testing the Code Start the server by running the following: node app.js Now, our API is up and running. You can send a test HTTP request through Postman. By sending a request to localhost:3000/users, you should see a 200 OK status code. For this test, no request body is needed for the incoming request.
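If you prefer a scripted check over Postman, the following sketch performs the same smoke test; it assumes the server above is running locally on port 3000 and uses the JDK's built-in HTTP client purely for illustration.
Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiSmokeTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // GET /users should return the in-memory user list with a 200 status code.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:3000/users"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
        System.out.println("Body: " + response.body());
    }
}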
In my previous articles listed below, I have shown how to use Swagger, especially the Springdoc implementation, for doing the code first/bottom-up approach. OpenAPI 3 Documentation With Spring Boot Doing More With Springdoc-OpenAPI Extending Swagger and Spring Doc Open API This time I am writing about the design first/top-down approach. I am not writing about the usual generated Java server and, say, associated Angular TypeScript client code. But first, some background context. Background Some time back I had the opportunity to use PingFederate to solve a business problem for a client of mine (no details due to NDAs). This involved working with the US government’s SSN verification web service and leveraging OIDC for this purpose. The actual code I wrote was just a few Spring Boot classes. The project was more about architecture, integration, infrastructure, etc. When working on this project, I created a side utility. Highlights This is the first time in the PingFed world such a utility has been created. There are some innovative concepts in it. Creating it had some challenges. We will discuss them along with how they were overcome. What Does This Article Offer to the Reader? Speeds up getting the reader started on PingFederate Introduces my utility that helps in meeting the above objective Also showcases two sample applications that demonstrate the Authorization Code Flow: These sample applications are used to demonstrate the effectiveness of our PingFederate configuration. Of particular interest to the reader will be the application that demonstrates my attempt at the authorization code flow using the BFF pattern for the Spring Boot and Angular applications. Note: While these sample applications have been tuned for PingFederate, it should be easy to tweak them for other OIDC providers like Okta, Auth0, etc. Also note: When working on my client's project, there was no front end. It was a machine-to-machine communication project. That said, for most readers, it would be more relevant to have a front end in the examples. Therefore, the two examples do have a front end. A Quick Swagger Recap Swagger supports both the code first/bottom-up and design first/top-down approaches. A Swagger document can be created by using: Swagger Editor Code-first libraries like springdoc, SpringFox, Swagger Core, and related libraries that can introspect the actual code The Swagger YAML/JSON document can be visualized using the Swagger UI. This UI is also exposed by the springdoc and SpringFox libraries. Swagger Codegen can be used to generate server/client code. Lastly, there is SwaggerHub, which leverages all the Swagger tools and offers much more when using the design first/top-down approach. What Is PingFederate? PingFederate describes itself as follows: "PingFederate is an enterprise federation server that enables user authentication and single sign-on. It serves as a global authentication authority that allows customers, employees, and partners to securely access all the applications they need from any device. PingFederate easily integrates with applications across the enterprise, third-party authentication sources, diverse user directories, and existing IAM systems, all while supporting current and past versions of identity standards like OAuth, OpenID Connect, SAML, and WS-Federation. It will connect everyone to everything." In my limited context, I used it for OIDC and OAuth purposes. While on the subject of PingFederate, it is not a free product.
That said, you can always download and use the latest version of Ping products for free: trial license files are available, and I was able to keep getting new trial license files as needed. I found it very easy to learn. I used PingFederate because, in my client's project, some requirements were met better by PingFederate than by, say, its cloud-based alternative. What Is the Problem Definition We Are Trying To Solve? Problem definition: The PingFederate Admin API can be used to automate its setup and configuration, in addition to doing it manually through the admin console. However, the lack of a wrapper in any programming language makes it hard to administer and configure programmatically. Elaborating on the point, just to illustrate the problem: AWS provides SDKs in various programming languages. These SDKs sit on top of the underlying web service APIs, and it's always easier to use an AWS SDK than to work with the underlying web services using Postman/cURL. Similarly, for PingFederate, a Java wrapper was achieved. Note: This has been done for the first time in the PingFederate world. :) It is also possible to achieve this in other languages if needed. Is This All That We Did? Is all we did run a Maven-based code generator that reads the Swagger specification of the PingFederate Admin API, generate some code, and use that? Yes and no. High-Level Solutioning Here, we have two flows, represented by blue and green arrows. The blue arrows demonstrate: The use of Swagger Core and related code-first, annotation-based libraries, which cause the automatic generation of the Swagger YAML/JSON Admin API document; this is part of PingFederate itself. This Swagger document is leveraged by the code generator to generate actual code. In our case, we are generating Java REST client code. The green arrows demonstrate: The user interacts with our library: additional convenience code and a particular RestTemplate interceptor. This in turn invokes the generated code. Finally, the PingFederate Admin API is invoked, which changes/configures PingFederate. Hurdle in getting this to work: The generated code was not usable in some scenarios. Read more about that and the adopted solution in these Swagger notes on GitHub. In addition to the general approach used, we had to innovate further and resolve the hurdles. That's where the interceptor was leveraged. How To Set Up Follow the steps in this GitHub repo; there is a README.md and a Setup.md. To summarize, these are the steps: Clone the project. Maven-build the project. Download the ZIP files and license files of PingFederate and PingDirectory. Also download a MySQL connector JAR file. Verify the downloads. Configure MySQL root user credentials. Install and start PingDirectory and PingFederate using the provided Ant script. Launch the PingFederate admin console for the first time. Maven-build the project with the additional option of generating the Admin API client code. Use the generated Admin API client code to administer PingFederate. The code is available in the Git repository.
However, let's discuss some of the code below for better visualization:

Java
public void setup() throws NoSuchAlgorithmException, KeyManagementException, FileNotFoundException, IOException {
    String ldapDsId = "MyLDAP";
    String formAdapterid = "HTMLFormAdapter";
    String passwordValidatorId = "PasswordValidator";
    String atmId1 = "testingATM1";
    String policyId1 = "testingpolicy1";
    String ldapAttributeSourceId = "mypingfedldapds";
    String atmId2 = "testingATM2";
    Properties mySqlProps = PropertiesUtil.loadProps(new File("../mysql.properties"));
    this.setupDb(mySqlProps);
    new LdapCreator(core)
        .createLdap(ldapDsId, "MyLdap", "localhost", "cn=Directory Manager", "manager");
    PasswordCredentialValidator passwordCredentialValidator = new PasswordCredentialValidatorCreator(core)
        .createPasswordCredentialValidator(
            ldapDsId, passwordValidatorId, passwordValidatorId, "uid=${username}");
    IdpAdapter idpAdapter1 = new IdpAdapterCreator(core)
        .createIdpAdapter(
            passwordValidatorId, formAdapterid,
            new String[] {"givenName", "mail", "sn", "uid"},
            new String[] {"uid"},
            "uid");
    IdpAdapterMapping createdIdpAdapterMapping =
        new IdpAdapterMappingCreator(core).createIdpAdapterGrantMapping(formAdapterid, "username");
    new JwtAtmCreator(core)
        .createJWTATM(
            atmId1, "jwtatm1", 120, 1,
            AutomationSharedConstants.AtmOauth_PersistentGrantUserKeyAttrName,
            "iat", "nbf");
    new AtmMappingCreator(core)
        .createTokenMappings(
            "jwtatm1mapping",
            AccessTokenMappingContext.TypeEnum.IDP_ADAPTER,
            formAdapterid, atmId1,
            new AccessTokenMappingAttribute(null,
                AutomationSharedConstants.AtmOauth_PersistentGrantUserKeyAttrName,
                SourceTypeIdKey.TypeEnum.OAUTH_PERSISTENT_GRANT, "USER_KEY"),
            new AccessTokenMappingAttribute(null, "iat",
                SourceTypeIdKey.TypeEnum.EXPRESSION,
                "#iat=@org.jose4j.jwt.NumericDate@now().getValue()"),
            new AccessTokenMappingAttribute(null, "nbf",
                SourceTypeIdKey.TypeEnum.EXPRESSION,
                "#nbf = @org.jose4j.jwt.NumericDate@now(), #nbf.addSeconds(10), #nbf = #nbf.getValue()")
        );
    new JwtAtmCreator(core)
        .createJWTATM(atmId2, "jwtatm2", 5, 2, "iss", "sub", "aud", "nbf", "iat");
    new AtmMappingCreator(core)
        .createTokenMappings("jwtatm2mapping",
            AccessTokenMappingContext.TypeEnum.CLIENT_CREDENTIALS,
            null, atmId2,
            new AccessTokenMappingAttribute(null, "iss",
                SourceTypeIdKey.TypeEnum.EXPRESSION,
                "#value = #this.get(\"context.HttpRequest\").getObjectValue().getRequestURL().toString(), #length = #value.length(), #length = #length-16, #iss = #value.substring(0, #length)"),
            new AccessTokenMappingAttribute(null, "sub",
                SourceTypeIdKey.TypeEnum.TEXT,
                "6a481348-42a1-49d7-8361-f76ebd23634b"),
            new AccessTokenMappingAttribute(null, "aud",
                SourceTypeIdKey.TypeEnum.TEXT,
                "https://apiauthete.ssa.gov/mga/sps/oauth/oauth20/token"),
            new AccessTokenMappingAttribute(null, "nbf",
                SourceTypeIdKey.TypeEnum.EXPRESSION,
                "#nbf = @org.jose4j.jwt.NumericDate@now(), #nbf.addSeconds(10), #nbf = #nbf.getValue()"),
            new AccessTokenMappingAttribute(null, "iat",
                SourceTypeIdKey.TypeEnum.EXPRESSION,
                "#iat=@org.jose4j.jwt.NumericDate@now().getValue()")
        );
    new ScopesCreator(core).addScopes("email", "foo", "bar");
    new ClientCreator(core)
        .createClient(
            AutomationSharedConstants.AuthCodeClientId,
            AutomationSharedConstants.AuthCodeClientId,
            AutomationSharedConstants.AuthCodeClientSecret,
            atmId1, true, null,
            "http://" + AutomationSharedConstants.HOSTNAME + ":8080/oidc-hello|http://" + AutomationSharedConstants.HOSTNAME + ":8081/login/oauth2/code/pingfed",
            GrantTypesEnum.AUTHORIZATION_CODE,
            GrantTypesEnum.ACCESS_TOKEN_VALIDATION);
    new ClientCreator(core)
        .createClient(
            "manual2", "manual2", "secret",
            atmId2, true, null, "",
            GrantTypesEnum.CLIENT_CREDENTIALS);
    Pair<String, String[]>[] scopesToAttributes = new Pair[] {
        Pair.with("email", new String[] {"email", "family_name", "given_name"})
    };
    new OpenIdConnectPolicyCreator(core)
        .createOidcPolicy(
            atmId1, policyId1, policyId1,
            false, false, false, 5,
            new Triplet[] {
                Triplet.with("email", true, true),
                Triplet.with("family_name", true, true),
                Triplet.with("given_name", true, true)},
            AttributeSource.TypeEnum.LDAP,
            ldapDsId, ldapAttributeSourceId, "my pingfed ldap ds",
            SourceTypeIdKey.TypeEnum.LDAP_DATA_STORE,
            new Pair[] {
                Pair.with("sub", "Subject DN"),
                Pair.with("email", "mail"),
                Pair.with("family_name", "sn"),
                Pair.with("given_name", "givenName")},
            scopesToAttributes,
            true, true,
            "uid=${" + AutomationSharedConstants.AtmOauth_PersistentGrantUserKeyAttrName + "}",
            "/users?uid=${" + AutomationSharedConstants.AtmOauth_PersistentGrantUserKeyAttrName + "}");
}

The above is an actual code snippet that I used to administer PingFederate. As an example, let's look at what is happening in the createLdap method of the LdapCreator class.

Java
public DataStore createLdap(String id, String name, String hostName, String userDn, String password) {
    DataStoresApi dataStoresApi = new DataStoresApi(core.getApiClient());
    core.setRequestTransformBeans(new TransformBean("type", type -> TypeEnum.LDAP.name()));
    core.setResponseTransformBeans(new TransformBean("type",
        type -> type.charAt(0) + type.substring(1).toLowerCase() + "DataStore"));
    LdapDataStore ldapDataStore = new LdapDataStore();
    List<String> hostNames = addStringToNewList(hostName);
    ldapDataStore.setHostnames(hostNames);
    ldapDataStore.setType(TypeEnum.LDAP);
    ldapDataStore.setId(id);
    ldapDataStore.setName(name);
    ldapDataStore.setLdapType(LdapTypeEnum.PING_DIRECTORY);
    ldapDataStore.setUserDN(userDn);
    ldapDataStore.setPassword(password);
    DataStore createdDataStore = dataStoresApi.createDataStore(ldapDataStore, false);
    return createdDataStore;
}

LdapCreator is a layer that was written on top of the generated code. The classes DataStoresApi, LdapDataStore, and DataStore come from the generated code. In the createLdap method, the lines below are how we instruct the interceptor to transform the request and response.

Java
core.setRequestTransformBeans(new TransformBean("type", type -> TypeEnum.LDAP.name()));
core.setResponseTransformBeans(new TransformBean("type",
    type -> type.charAt(0) + type.substring(1).toLowerCase() + "DataStore"));

(Again, you can read more about that from the previous link to the Swagger notes on GitHub.) It did something, but how do we know it really worked? Does It Really Work? The code base in the repository also contains example code that demonstrates the Authorization Code Flow. The example code projects can be set up and run using their Readme.md files. The example code projects also serve the purpose of demonstrating that our PingFederate setup worked, in addition to being, hopefully, useful on their own. The Example Code Projects There are two examples: simple-oidc-check springboot.oidc.with.angular The example simple-oidc-check is a roll-your-own example. It demonstrates the Authorization Code Flow and also the Client Credentials grant flow. It can be used to better understand many different concepts, including JEE and OIDC. There are some concepts in it that might raise your eyebrows and are not so often seen. The example springboot.oidc.with.angular is an Authorization Code Flow BFF pattern implementation. This is often considered the most secure approach because the access token is kept only at the back end.
The access token never reaches the JavaScript/HTML layer. This and other approaches are also discussed in the example code's Readme.md. Supported Versions The versions of PingFederate supported by this utility are detailed here. Future Vision I created this utility mainly because it helped me stand up my PingFed PoCs rapidly when working on a client project. I will try to maintain it as long as it does not tax me too much and PingFederate itself does not provide similar solutions. I can already think of some more improvements and enhancements. Stars, likes, clones, etc. on the Git repository will encourage me to maintain it and carry on with it.
The technology behind connectivity can seem daunting. By now, we are all used to instant connectivity that puts the world at our fingertips. We can purchase, post, and pick anything, anywhere, with the aid of desktops and devices. But how does it happen? How do different applications on different devices connect with each other, allowing us to place an order, plan a vacation, or make a reservation with just a few clicks? The answer is the API (Application Programming Interface), the unsung and often underrated hero of the modern world. What Is an API? APIs are the building blocks of online connectivity. They are a medium for multiple applications, data, and devices to interact with each other. Simply put, an API is a messenger that takes a request, tells the system what you want to do, and then returns the response to the user. Documentation is drafted for every API, including specifications regarding how information gets transferred between the two systems. Why Is an API Important? APIs can interact with third-party applications publicly, ultimately extending the reach of an organization's business. So, when we book a ticket via "Bookmyshow.com," we fill in details regarding the movie we plan to watch, like: Movie name Locality 3D/2D Language These details are fetched by the API and taken to the servers associated with different movie theatres, which send back a collected response from multiple third-party servers, giving users the convenience of choosing which theatre fits best. This is how different applications interact with each other. Instead of building one large application and adding ever more functionality to its code, the present time demands a microservice architecture, wherein we create multiple individually focused modules with well-defined interfaces and then combine them to make a scalable, testable product. The product or software that might have taken a year to deliver can now be delivered in weeks with the help of a microservice architecture, and APIs are a necessity for it. Consider an application that delivers music, shopping, and bill payment services to end users under a single hood. The user needs to log into the app and select the service to consume. An API is needed to make the different services collaborate in such an application, contributing to an overall enhanced UX. An API also adds an extra layer of security to the data: neither the user's data is overexposed to the server, nor is the server's data overexposed to the user. Say, in the case of movies, the API tells the server what the user would like to watch, and then tells the user what they have to give to redeem the service. Ultimately, you get to watch your movie, and the service provider is credited accordingly. API Performance Monitoring and Application Performance Monitoring Differences As similar as these two terms sound, they perform distinct checks on the overall application connectivity: Application performance monitoring is necessary for high-level analytics regarding how well the app is executing internally. It facilitates a check on the internal connectivity of the software. The following are key data factors that must be monitored: Server loads User adoption Market share Downloads Latency Error logging API performance monitoring is required to check whether there are any bottlenecks outside the server; they could be in the cloud or in a load-balancing service.
These bottlenecks are not dependent on your application performance monitoring, but they are still considered catastrophic, as they may disrupt the service for end users. API performance monitoring facilitates a check on the external connectivity of the software, aiding its core functionalities: Back-end business operations Alert operations Web services Why Is API Performance Monitoring a Necessity? 1. Functionality With the emergence of modern agile practices, organizations are adopting a virtuous cycle of developing, testing, delivering, and maintaining by monitoring the response. It is essential to involve API monitoring as part of this practice. A script must be maintained against the appropriate, latest versions of the functional tests to ensure a flawless experience of the services for the end user. Simply put, if your API goes south, your app goes with it. For instance, in January 2016, the Twitter API suffered a worldwide outage. The outage lasted more than an hour, and within that period, it impacted thousands of websites and applications. 2. Performance Organizations leave themselves open to performance problems if they neglect to thoroughly understand the process behind every API call. API monitoring also helps to identify which APIs are performing better and how to improve the APIs with weaker performance. 3. Speed/Responsiveness Users can specify the critical API calls in the performance monitoring tool and set their threshold (acceptable response time) to ensure they get alerted if the expected response time deteriorates. (A minimal code sketch at the end of this article illustrates the idea.) 4. Availability With the help of monitoring, we can see whether all the services hosted by our applications are accessible 24×7. Why Monitor an API When We Can Test It? An API test can be highly composite, considering the large number of steps involved. This creates a problem in terms of the frequency at which the test can take place. This is where monitoring steps in, allowing hourly checks of the indispensable aspects and helping us focus on what is most vital to our organization. How To Monitor API Performance Identify the APIs you depend on: recognize the APIs you employ, whether they are third-party or partner APIs, and whether they connect internally or externally. Comprehend the functional and transactional use cases to provide transparency into the services being hosted; this improves performance and MTTR (Mean Time to Repair). Determine whether you have the test cases required for monitoring: do existing test cases need to be altered, or do new ones urgently need to be developed? Know the right tool: API performance monitoring is highly dependent on the tool being used. You need an intuitive, user-friendly, result-optimizing tool with everything packed in. Some well-known platforms for API performance testing are: CA Technologies (now Broadcom Inc.) AlertSite Rigor Runscope One more factor to keep note of is API browser compatibility, to understand how well your API works with different browsers. To know more about this topic, follow our blog about "API and Browser Compatibility." Conclusion API performance monitoring is a need of modern times that gives you a check on the internal as well as the external impact of the services hosted by a product. Not everyone cares to bother about APIs, but we are glad you did! Hoping this article helps expand your understanding of the topic. Cheers!
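To make the threshold idea from the Speed/Responsiveness point above a little more concrete, here is a minimal, illustrative Java sketch of such a check. It is not tied to any of the monitoring platforms listed earlier; the endpoint URL and the 800 ms threshold are placeholder assumptions you would replace with your own values.

Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class ApiResponseTimeCheck {

    // Placeholder values: point these at your own API and acceptable response time.
    private static final URI ENDPOINT = URI.create("https://api.example.com/health");
    private static final Duration THRESHOLD = Duration.ofMillis(800);

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(ENDPOINT).GET().build();

        // Measure how long the call takes end to end.
        long start = System.nanoTime();
        HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
        Duration elapsed = Duration.ofNanos(System.nanoTime() - start);

        if (response.statusCode() >= 400) {
            System.out.println("ALERT: " + ENDPOINT + " returned HTTP " + response.statusCode());
        } else if (elapsed.compareTo(THRESHOLD) > 0) {
            System.out.println("ALERT: " + ENDPOINT + " took " + elapsed.toMillis()
                    + " ms (threshold " + THRESHOLD.toMillis() + " ms)");
        } else {
            System.out.println("OK: " + ENDPOINT + " responded in " + elapsed.toMillis() + " ms");
        }
    }
}

In practice, a check like this would run on a schedule (or inside a dedicated monitoring platform), and the alert would be routed to email, chat, or an incident-management tool rather than printed to standard output.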
API-First is an approach to software development that emphasizes designing and developing the API as the primary focus of development. This approach offers many benefits, including increased flexibility, reduced development time, increased reliability, and easier testing. By developing the API first, developers can create a stable and consistent API that can be used by multiple clients and platforms. What Is API-First? API-First is an approach to software development that emphasizes designing and developing an Application Programming Interface (API) as the first step in the process. Instead of designing and developing the user interface or other aspects of the application first, the API is the primary focus of development. This approach has become increasingly popular as more applications are developed to work with multiple devices and platforms. In an API-First approach, the API is designed and developed independently of any specific client or user interface. The API is the contract between the server and the client, defining the data format, behavior, and methods available to the client. The API is often developed using a language-agnostic description format, such as OpenAPI or Swagger, which can be used to generate client libraries in multiple languages. How Does API-First Work? API-First is an approach to software development that emphasizes designing and developing an application’s API (Application Programming Interface) before any other parts of the application. The basic idea behind API-First is to create a well-defined API that can act as a stable contract between different parts of the application and with external services. Here are the key steps to implementing an API-First approach: Define the API: Start by defining the API contract, including the resources, endpoints, and data formats that will be used by the API. Test the API: Test the API to ensure it meets your functional requirements and performs as expected. This can be done using a variety of tools, including manual testing, automated testing, and API mocking. Implement the API: Once the API is defined and tested, it can be implemented using a variety of tools and technologies, such as serverless functions, containers, or microservices. Build the application: With the API in place, the rest of the application can be built around it, using the API as a stable contract for communication between different components. Deploy the application: Once the application is complete, it can be deployed to a production environment, where it can be accessed by external services and users. Benefits of the API-First Approach The benefits of an API-First approach are numerous. Here are a few: Improved flexibility: An API-First approach enables an application to work with multiple clients, including web browsers, mobile devices, and third-party applications. Since the API is designed and developed independently of any specific client or user interface, changes to the user interface or client application do not require modifications to the API. Reduced development time: By developing the API first, developers can focus on creating a stable and consistent API. Once the API is in place, client developers can start building their applications, which can reduce the overall development time of the project. Increased reliability: By focusing on the API first, developers can identify and address any potential issues with the API before building any client applications. 
This can result in a more reliable API and a better experience for the end-user. Easier testing: An API-First approach enables developers to test the API independently of the client application. This can make it easier to identify and isolate issues and ensure the API is functioning as expected. Better security: A well-designed API can help to improve the security of the application by enforcing access controls and limiting the exposure of sensitive data. Improved collaboration: By defining the API first, developers can work together more effectively, regardless of their technology stack. 5 Principles of API-First Development API-First development is an approach to software development that involves designing an API before implementing the user interface or any other part of the application. This approach prioritizes the API as the primary interface for the application and ensures the API is well-designed, scalable, and secure. Here are the five principles of API-first development: 1. Design for the Consumer API-First development starts with understanding the needs and requirements of the API’s users. Developers must design APIs that are easy to use, intuitive, and provide clear documentation. This involves understanding the use cases, business processes, and user stories the API will support. By designing for the consumer, developers can ensure the API meets the needs of its users and can be easily integrated into other systems. 2. Embrace Open Standards API-First development involves using open standards, such as REST, JSON, and OAuth, to ensure the API is interoperable with other systems. Open standards ensure the API can be easily integrated into third-party systems and can evolve and adapt over time. By embracing open standards, developers can avoid vendor lock-in and ensure the API is future-proof. 3. Focus on Scalability API-First development involves designing APIs that are scalable and can handle large volumes of traffic. This involves using a scalable architecture, caching, load balancing, and other techniques to ensure the API can handle the demands of its users. By focusing on scalability, developers can ensure the API is reliable, performant, and can handle peak loads. 4. Ensure Security API-First development involves designing APIs that are secure and protect user data. This involves using authentication, encryption, rate limiting, and other security measures to ensure the API is protected against malicious attacks. By ensuring security, developers can build trust with their users and ensure their data is safe and secure. 5. Test and Iterate API-First development involves testing the API early and often to ensure it meets the needs of its users. This involves using automated testing, continuous integration, and other testing techniques to ensure the API is reliable, performant, and scalable. By testing and iterating, developers can ensure the API is continuously improving and meeting the changing needs of its users. In summary, API-First development is an approach to software development that prioritizes the design and development of the API before any other part of the application. By following these five principles, developers can design and build APIs that are easy to use, scalable, secure, and reliable. Summary API-First development is an approach to software development that involves designing and developing an API before implementing other parts of the application. 
This approach prioritizes the API as the primary interface for the application and ensures the API is well-designed, scalable, and secure. The five principles of API-First development include: Designing for the consumer. Embracing open standards. Focusing on scalability. Ensuring security. Testing and iterating. By following these principles, developers can build APIs that are easy to use, scalable, secure, and reliable.
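As a small, non-authoritative illustration of these principles, the sketch below shows what implementing a simple, pre-agreed contract might look like in a Spring Boot service. The GET /users/{id} operation, the UserDto fields, and the class names are assumptions invented for this example rather than part of any real contract; in a contract-first workflow, the controller interface and DTOs would often be generated from the OpenAPI document (for example, with OpenAPI Generator), leaving only the business logic to write by hand.

Java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class ApiFirstDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(ApiFirstDemoApplication.class, args);
    }

    // Minimal DTO mirroring the assumed contract: { "id": ..., "name": ... }
    public record UserDto(long id, String name) {}

    // Implements the hypothetical contract operation GET /users/{id}.
    @GetMapping("/users/{id}")
    public UserDto getUser(@PathVariable long id) {
        return new UserDto(id, "Example User " + id);
    }
}

Because the contract is agreed on first, a web client, a mobile client, and automated contract tests can all be built against the same GET /users/{id} definition while this implementation evolves behind it.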
Following on from the third article of this series, this article covers some more case studies related to To Do tasks. For this use case, I used the MS Graph Java SDK 5.42.0. If you haven't read parts 1 and 2 of this series, here they are: Part 1: "Commonly Occurring Errors in Microsoft Graph Integrations and How To Troubleshoot Them (Part 1)" Part 2: "Commonly Occurring Errors in Microsoft Graph Integrations and How To Troubleshoot Them (Part 2)" Read Tasks With a Specific Time Zone The Microsoft Graph Todo Task is a powerful tool for developers to integrate the Microsoft To Do task management software into their applications. It enables them to create, update, and retrieve tasks using the Microsoft Graph API. One of the essential considerations when working with task management tools like Microsoft To Do is ensuring they operate in the correct time zone. Time zones play a crucial role in task management tools because these tools rely on deadlines and reminders to keep users on track. If a user creates a task and sets a deadline or reminder, it must be displayed accurately based on the user's time zone. Developers working with the Microsoft Graph Todo Task must ensure their applications handle time zones correctly to avoid confusion and ensure that tasks are displayed accurately. The Microsoft Graph API provides built-in functionality for handling time zones. When creating or updating tasks, developers can set the time zone information for the task. The API supports the IANA time zone format (e.g., "America/Los_Angeles") and the Windows time zone format (e.g., "Pacific Standard Time"). Developers can also use the Microsoft Graph API to retrieve the user's time zone information. This can be useful when displaying tasks and reminders in the correct time zone. By default, the API returns the user's time zone in the IANA time zone format (e.g., "America/Los_Angeles"). However, developers can also request the user's time zone in the Windows time zone format. When retrieving tasks using the Microsoft Graph API, developers can also specify the time zone in which they want to retrieve the tasks. This can be useful when displaying tasks to users in different time zones. By default, the API returns tasks in the user's time zone. However, developers can also specify a different time zone using the "Prefer" header in the API request.
For instance, in Java:

LinkedList<Option> requestOptions = new LinkedList<>();
requestOptions.add(new HeaderOption("Prefer", "outlook.timezone=" + "\"" + timeZone + "\""));

Besides the header adjustment, let's take a look at how to create a To Do task with a specific time zone:

Java
String timeZone = "Pacific Standard Time"; // replace with the desired time zone
String apiEndpoint = "https://graph.microsoft.com/v1.0/me/tasks";
String json = "{\n" +
        "  \"subject\": \"Complete Task\",\n" +
        "  \"body\": {\n" +
        "    \"contentType\": \"Text\",\n" +
        "    \"content\": \"This is a task with specific time zone\"\n" +
        "  },\n" +
        "  \"dueDateTime\": {\n" +
        "    \"dateTime\": \"2021-12-01T14:30:00\",\n" +
        "    \"timeZone\": \"" + timeZone + "\"\n" +
        "  }\n" +
        "}";
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(apiEndpoint))
        .header("Authorization", "Bearer " + accessToken)
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(json))
        .build();
HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
System.out.println(response.body());

In the above code, replace the value of the timeZone variable with the desired time zone. Also, replace the subject and body properties of the task object with your own values. The dueDateTime property of the task object contains a dateTime and a timeZone property: set the dateTime property to the desired due date and time in ISO format, and set the timeZone property to the desired time zone. Time zone is a crucial consideration when working with task management tools like Microsoft To Do. Developers working with the Microsoft Graph Todo Task must ensure their applications handle time zones correctly to avoid confusion and ensure tasks are displayed accurately. The Microsoft Graph API provides built-in functionality for handling time zones and supports DST, making it easy for developers to create applications that work with tasks in different time zones. Read the Changekey Property of ToDo Tasks Based on a documented issue on GitHub from November 2020, there are properties missing from the Microsoft.Graph.todoTask resource type as compared to the Microsoft.OutlookServices.Task type. Some of these properties are absolutely necessary for business logic and play a key role, according to the comments on the issue: AssignedTo: the name of the person the task has been assigned to. Attachments: the collection of FileAttachment and ItemAttachment attachments for the task. Categories: the categories associated with the task. ChangeKey: the version of the task. HasAttachments: set to true if the task has attachments. Owner: the name of the person who created the task. ParentFolderId: the unique identifier for the task's parent folder. Sensitivity: indicates the level of privacy for the event: Normal, Personal, Private, Confidential. StartDateTime: the date in the specified time zone when the task is to begin. As of now, not all of the requested properties are available for use in Microsoft Graph v1.0. In the current version of the MS Graph Java SDK, 5.42.0, the following properties are included in the default data model: Attachments Categories HasAttachments Owner StartDateTime In this use case, I will show how to read the changeKey from ToDo tasks via the MS Graph APIs and the Java SDK. The changeKey is part of the ETag that comes with the ToDo task.
To retrieve the ETag value for a ToDo task via Microsoft Graph, you can use the GET method on the tasks endpoint, along with the task ID. Here is an example URL to retrieve the ETag value for a specific task: https://graph.microsoft.com/v1.0/me/tasks/{task_id}?$select=etag Replace {task_id} with the ID of the task you want to retrieve the ETag value for. The $select=etag query parameter ensures that only the ETag value is returned in the response. How do you read the changeKey from the ToDo task via the MS Graph Java SDK? The SDK contains the Java class AdditionalDataManager. The class holds additional properties that are not part of the default object's schema, according to the Microsoft documentation. With it, we can read all JSON elements provided in the API response that are not included in the default data model. Let's take a look at the following sample:

Java
public static final String ODATA_ETAG = "@odata.etag";
public static final String PREFIX = "W/\"";
public static final String SUFFIX = "\"";

String eTag = todoTask.additionalDataManager().get(ODATA_ETAG).getAsString();
String changeKey = convertETagToChangeKey(eTag);

private String convertETagToChangeKey(String eTag) {
    String changekey = "";
    if (Objects.nonNull(eTag) && eTag.startsWith(PREFIX)) {
        String noPrefixSubString = eTag.substring(PREFIX.length());
        if (noPrefixSubString.endsWith(SUFFIX)) {
            return noPrefixSubString.substring(0, noPrefixSubString.length() - 1);
        }
    }
    return changekey;
}

In this sample, we use the key @odata.etag to read the value of the ETag, and then remove the prefix and suffix of the ETag to get the value of the changeKey. Conclusion At this point, you should have a better understanding of the common integration errors that may be seen in Microsoft Graph integrations and how to troubleshoot them. I hope this article was informative and useful. Feel free to comment below and share this article!
API-led connectivity is the most important design pattern facing IT today. It is the key to digital transformation and to an IT strategy that unlocks an organization's products and services to its consumers. There are many aspects to this pattern, so in this article, I will put down some key tenets and a few ideas around it. Then, I hope, through comments and conversations, I can begin a discussion and follow up with more articles in the future. As a Senior Solution Architect in EPAM's API Practice, I believe the most impactful change that we can bring about is the adoption of API-led connectivity. Most organizations have APIs to some degree, often evolving out of an SOA or microservices approach. APIs are brought into an organization one at a time, so through natural evolution there will be many APIs, but they won't follow an API-led connectivity approach. The next step is to follow this standard, but there is often resistance, as with all changes. This leads organizations down a point-to-point path of connecting their systems, which is an anti-pattern in the API world. This is like the days of client-server applications, which were ultimately replaced by the model-view-controller design pattern. API-led connectivity has a similar concept. In model-view-controller, the users of the system connect at the view layer, the controller manages the orchestration, and the model connects to the data. Similarly, in API-led connectivity, there are also three layers: experience (view), process (controller), and system (model). These three layers allow you to decouple the consumers of your application, through the experience layer, from your systems of record in the system layer. Your business logic and orchestration sit in the process layer. There are many benefits from this, as I'll outline below. The following diagram shows an example of API-led connectivity. 1. Experience Layer At the experience layer, you can apply security to different application consumers depending on who they are. These consumers could be web, mobile, and third parties, either internal or external to your organization. You can provide multiple security policies, such as client ID, certificates, and OAuth, and multiple SLA tiers based on subscriptions. You can manage these consumers by providing or withdrawing access to your applications, authorizing access, and monitoring these APIs to measure volumes and throughput. This also makes it easy to monetize your products or services at this layer. 2. Process Layer The process layer will contain your business capabilities and can be divided into various domains. Each line of business can have its own APIs defining the products or services it provides. For example, you would have a set of APIs defining the customer, products, or billing. These APIs at the process level are accessed by consumers from the experience layer. In addition, the process APIs can call other APIs at the process layer or call APIs at the system layer, which provide or update information from your systems of record. 3. System Layer The system layer exposes information from your various systems of record. These will be your legacy systems, your databases, CRMs (Customer Relationship Management) like Salesforce, and ERPs (Enterprise Resource Planning) like SAP. For example, you could add queues, caches, timeouts, and circuit breakers at this layer if you are experiencing performance issues (a short code sketch at the end of this article illustrates the idea).
In addition, some API framework vendors automatically create this layer and use AI to improve performance, eliminate redundancies, and remove unused functionality. Large Organizations The API-led connectivity approach is beneficial in a large organization where you'll have multiple development teams. Your different lines of business can each work on APIs within their own domain in the process layer. Your web, mobile, or third-party partners can connect to APIs at the experience layer. Likewise, the system layer can be managed by the central IT groups associated with your various systems of record. Pros and Cons Some common complaints I hear about this three-layered approach are that there are multiple network hops from layer to layer and that there is added complexity. These are the same questions I heard when we moved to model-view-controller, or when we started using separate servers for the database and the application. However, a well-designed application will always trump a few milliseconds of performance. An API-led connectivity approach can even lead to improved performance by adding caching, spike control, and monitoring for multiple consumers, and by right-sizing the security on the system layer. Also, note that security is usually enforced at the experience and system layers, whereas the process layer is usually secured with faster client ID and password-level security. This can lead to overall faster performance of your systems. Two other important benefits of an API-led approach are reusability and the ability to quickly plug in new consumers and systems of record. Summary This is a high-level view of what API-led connectivity is and why you should use it. Do we need an experience layer? Should the experience layer be divided by domain, channel, or partner? How should you model the data in your system APIs? In the future, I'll go into more detail with use cases, examples of each layer, and how this can help your organization. Please share your experience with API-led connectivity, including both benefits and consequences.
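To illustrate the earlier point about adding timeouts and circuit breakers in front of a system of record, here is a minimal, framework-free Java sketch of the idea. The endpoint URL, the two-second timeout, the failure threshold, and the back-off interval are all placeholder assumptions; a real system-layer API would more likely use an established resilience library (for example, Resilience4j) or gateway-level policies rather than hand-rolled code like this.

Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

/** A toy guard that temporarily stops calling a slow or failing system-of-record API. */
public class SystemApiGuard {

    private static final int FAILURE_THRESHOLD = 3;                        // assumption: trip after 3 failures
    private static final Duration OPEN_INTERVAL = Duration.ofSeconds(30);  // assumption: back off for 30 seconds
    private static final Duration CALL_TIMEOUT = Duration.ofSeconds(2);    // assumption: 2-second call timeout

    private final HttpClient client = HttpClient.newHttpClient();
    private int consecutiveFailures = 0;
    private long openUntilMillis = 0;

    public String fetch(URI systemEndpoint) throws Exception {
        if (System.currentTimeMillis() < openUntilMillis) {
            // Circuit is "open": fail fast instead of hammering the back end.
            throw new IllegalStateException("Circuit open, skipping call to " + systemEndpoint);
        }
        HttpRequest request = HttpRequest.newBuilder(systemEndpoint)
                .timeout(CALL_TIMEOUT)
                .GET()
                .build();
        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            consecutiveFailures = 0; // success resets the failure counter
            return response.body();
        } catch (Exception e) {
            if (++consecutiveFailures >= FAILURE_THRESHOLD) {
                openUntilMillis = System.currentTimeMillis() + OPEN_INTERVAL.toMillis();
            }
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder system-of-record endpoint for demonstration purposes.
        String body = new SystemApiGuard()
                .fetch(URI.create("https://system-of-record.example.com/customers/42"));
        System.out.println(body);
    }
}

The point of the guard is simply that, once the system of record starts timing out, the system API fails fast for a while instead of letting every consumer request queue up behind a slow back end.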
John Vester
Lead Software Engineer,
Marqeta @JohnJVester
Colin Domoney
API Security Research Specialist & Developer Advocate,
42Crunch
Saurabh Dashora
Founder,
ProgressiveCoder
Cameron HUNT
Integration Architect,
TeamWork France