Code efficiency

General ideas

When I talk to developers about working on efficiency and optimization of applications, and they haven't done this kind of work before, they're often intimidated by it, and I'll hear the same two misconceptions: two totally understandable but thankfully incorrect assumptions about what this work really involves.

The first misconception: attempting to tweak and improve every tiny part of the application.

Trying to squeeze every last ounce of improvement out of every single line.

Because the likelihood is that even in a large application, only a small fraction of the code has any real impact on performance.

The second misconception: that this work demands esoteric expertise, that "nobody actually uses the string class, that's just for noobs."

And that we're going to have to replace everything with a collection of obscure methods buried in complex classes in some esoteric framework we've never even heard of before.

But being an expert is not necessary, and in one way may even be detrimental, because the attitude we should bring to the first steps of this is one of humility: not a word easily associated with software developers but, as we'll see, one that can be very useful.

Think of medical ethics: first, do no harm.

When looking at the blank page and the blinking cursor, your focus should be on making this code clear, accurate, precise, readable, understandable, modular. Working on that will never be a waste of your time. What I didn't say was: focus on making it fast.

 

It was a waste of time. And there are many optimizations like that, that have no impact on the experience of the user and, importantly, no possible impact on the objective tasks that could be accomplished. The developer fixed an imaginary problem. They totally wasted their time. And that was time that wasn't spent somewhere else fixing a real problem.

You would be mistaking the symptom for the problem, and the same symptom can manifest for very different problems. A user interface that freezes for a second might be a memory issue in one case, a threading issue in another, a network issue in another, an algorithm issue in another.  We must first ask, where is this application running? What is it doing? What can we predict about that? And importantly, is there anything we control about this? Because if we do not ask those questions, and we just treat all applications the same, we will miss opportunities for those big efficiency improvements.

 

People are expensive and hardware is cheap, and sometimes the most efficient thing to do is a hardware improvement, not more development time.

So we need to stay very conscious of the impacts our decisions have, and of what is under our control.

Always look for the easy win: the simplest fix that will get us the biggest results.

 

Strategies for memory efficiency

We profile the application. We need some kind of data over time, even if that's just a few seconds' worth. So we'll use a profiling tool of some kind.

We have the idea of leaked growth: there is an actual memory leak somewhere, whether fast or slow. Objects have been created and discarded but never reclaimed. Leaks leave their own kind of memory footprint when we're profiling, as we'll see in a moment. Then we have real, legitimate growth.
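A minimal Java sketch of this kind of leaked growth, assuming a hypothetical request handler that registers data in a long-lived collection and never removes it (names and sizes are purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of "leaked growth": objects are added to a collection
// that lives for the life of the application, so the garbage collector
// can never reclaim them.
public class LeakDemo {
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest() {
        // Each call adds ~1 MB that is never removed. Under a profiler,
        // this shows up as a baseline that keeps rising and never returns
        // to its previous level, even after garbage collection.
        CACHE.add(new byte[1024 * 1024]);
    }

    public static void main(String[] args) throws InterruptedException {
        while (true) {            // a slow leak, but a leak all the same
            handleRequest();
            Thread.sleep(100);
        }
    }
}
```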

 

So I'd like to see a memory allocation graph that looks a little like a sawtooth: we spike up, but we bring it back down to the same baseline level.

 

I'll actually allow this photo field to sit empty in this object until I need it. And the only time I need that image object, that photo, is if somebody asks for it. So I'm going to change the property, the getter for that field, to a lazy instantiation version. And all that means is I'm first going to ask, hey, somebody's looking for this photo object: is photo null? Is there actually an object in that field? If it is null, I'm going to run exactly the same code I would have run in the constructor, to create that photo object and store it, and then I'll go ahead and return it.
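A minimal Java sketch of that lazy-instantiation getter; the Contact class, Photo type, and loadPhotoFromDisk() are hypothetical stand-ins, not taken from the course:

```java
public class Contact {
    private Photo photo; // deliberately left null in the constructor

    public Photo getPhoto() {
        // Only create the expensive object the first time it's requested.
        if (photo == null) {
            photo = loadPhotoFromDisk(); // same code the constructor would have run
        }
        return photo;
    }

    private Photo loadPhotoFromDisk() {
        return new Photo(); // placeholder for the real loading logic
    }
}

class Photo { }
```

In multithreaded code this simple null check would need synchronization, but it illustrates the idea: pay the allocation cost only when someone actually asks for the object.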

 

Refactoring

One, refactoring is not debugging. Your code already needs to work.

But refactoring is not a method for a bug hunt. It's not a way to find large showstopper bugs; you should have fixed those as soon as they happened.

Refactoring is not performance. This is another very common misconception: that we do this to improve our code, that cleaning our code up is going to make it faster. No, code performance is not the goal of refactoring nor, let me be quite clear here, is it even an expectation.

 

Refactoring gives you a great way to have those kinds of conversations, and to come up with answers that are more than just somebody's gut feeling. Now, if you're a solo developer, you may not have that option available to you, so you need to pay a little more attention to applying the individual techniques, particularly as you encounter ones you wouldn't naturally think about. But here's the thing: we are not trying to get to some point where we take every line of code and run through a hundred different formal techniques to see which one applies. We don't do that at all; instead, we're going to see if our code smells bad.

 

Code smell

  1. Duplicated code
  2. Very long methods: suddenly going from methods with five or ten lines to one with 80
  3. Too many comments

 

Using the Extract Method refactoring

Look at some code, identify a few lines that logically belong together, and create a new method from those lines.
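A before-and-after sketch of Extract Method in Java, in the spirit of the classic invoice-printing example (the method and class names are illustrative, not from the course):

```java
class InvoicePrinter {
    // Before: formatting and printing happen inline in a longer method.
    void printOwingBefore(String name, double amount) {
        // ... calculate outstanding amount ...
        // These lines logically belong together: printing the details.
        System.out.println("name: " + name);
        System.out.println("amount: " + amount);
    }

    // After: the related lines become a small, well-named method.
    void printOwingAfter(String name, double amount) {
        // ... calculate outstanding amount ...
        printDetails(name, amount);
    }

    private void printDetails(String name, double amount) {
        System.out.println("name: " + name);
        System.out.println("amount: " + amount);
    }
}
```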

 

Try to force yourself to err on the side of "smaller is better" for a while. One of the benefits is reuse: the more specific, modular methods you write, the more opportunities you'll find for reusing a method in another location. And if you use inheritance, it's much easier to override a method if that method is defined very specifically.

Refactorings that remove temps

Make sure there's only one place where we set the value of this temp, and that we're not changing it to mean something else later on in the method.

You're absolutely right, but remember: the pure efficiency of the code is not our first goal in refactoring; clarity is. The likelihood is that a typical expression you would deal with in this sort of refactoring is so undemanding that it wouldn't be noticeable at all, even if it had to be called several more times.

The Inline Temp refactoring, often used as part of the Replace Temp with Query refactoring.
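A minimal Java sketch of Replace Temp with Query, using a hypothetical order-pricing example (the names and numbers are stand-ins):

```java
public class Order {
    private double quantity = 3;
    private double itemPrice = 19.99;

    // Before: a temp holds the result of an expression.
    public double priceWithTemp() {
        double basePrice = quantity * itemPrice; // the temp
        if (basePrice > 1000) {
            return basePrice * 0.95;
        }
        return basePrice * 0.98;
    }

    // After: the temp is replaced by a query method. The expression may be
    // evaluated more than once, but for a cheap expression like this the
    // cost is unnoticeable, and the query is now reusable elsewhere.
    public double priceWithQuery() {
        if (basePrice() > 1000) {
            return basePrice() * 0.95;
        }
        return basePrice() * 0.98;
    }

    private double basePrice() {
        return quantity * itemPrice;
    }
}
```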

Refactorings that add temps

Remove Assignments to Parameters: if you are passing parameters into a method, be very aware of the impact of any assignment, any change you make to those parameters.
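A minimal Java sketch of Remove Assignments to Parameters, with hypothetical discount logic:

```java
public class Discounts {
    // Before: the parameter itself is reassigned, which obscures what
    // inputValue means later in the method.
    public int discountBefore(int inputValue, int quantity) {
        if (quantity > 50) {
            inputValue -= 2; // inputValue no longer means what the caller passed
        }
        return inputValue;
    }

    // After: introduce a temp so the parameter keeps its original meaning
    // for the whole method.
    public int discountAfter(int inputValue, int quantity) {
        int result = inputValue;
        if (quantity > 50) {
            result -= 2;
        }
        return result;
    }
}
```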


 Code efficiency

My requirement is to validate a field for alphabetic and special characters.

ABAP Documentation: Some special characters for Single Character Strings

[[:blank:]] Name for blank characters and horizontal tabulators in a value set
[[:cntrl:]] Name for all control characters in a value set
[[:digit:]] Name for all digits in a value set
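The original ABAP snippets are only referenced as screenshots below, so here is a hedged Java analogue of the same idea: validating that a field contains only alphabetic characters plus a small, assumed set of special characters, using Java's POSIX-style character classes (the allowed specials are an assumption for illustration):

```java
import java.util.regex.Pattern;

public class FieldValidator {
    // POSIX-style classes, analogous to ABAP's [[:alpha:]], [[:digit:]], etc.
    // \p{Alpha} matches letters; dot, underscore, and hyphen are the
    // hypothetical "special characters" allowed here.
    private static final Pattern VALID =
            Pattern.compile("[\\p{Alpha}._\\-]+");

    public static boolean isValid(String field) {
        return VALID.matcher(field).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("abc-def")); // true
        System.out.println(isValid("abc123"));  // false: digits not allowed
    }
}
```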

[Screenshot: regexp01]

[Screenshot: regexp02]

Boost your productivity

I was a really disorganized kid. My room looked like a war zone, my internal clock was always about 30 minutes late, and my life was a series of minor emergencies strung together by random chance. I spent so much time trying to keep up with my day-to-day responsibilities that my bigger life goals seemed completely out of reach.

By the time I hit my early twenties, the frustration of always feeling behind and overworked reached a breaking point, so I started searching for a solution. What I found were countless productivity philosophies and tools that promised to organize my life.

After years of trial and error, I ended up distilling them all down to six helpful habits that keep me productive and on track.

1. Make productivity personal

Calendars and to-do lists are must-have tools to organize your life, but if they don’t feel comfortable to you, you just won’t use them. It’s easy to become distracted by all the shiny apps, calendars, and task management systems out there, but the only one you need is the one that works best for you.

If you're a techie who wants to use mobile or desktop apps to save ideas and sync your task lists and calendar items, Wunderlist is a fantastic cross-platform task management system. All modern smartphones have a calendar and an easy-to-use reminder or task list app onboard.

If you’re more of a pen-and-paper person, there’s nothing wrong with a dedicated planning folio or simple steno pad.

2. Plan your work, work your plan

I love this phrase. It stresses the importance both of setting those big, lofty goals, and of then breaking them down into small, bite-sized, attainable steps. Once you’ve figured out how to manage your life, you need to be ruthless and unyielding in doing it: Make productivity a daily habit.

Before you start dumping random to-do items and events on your lists and calendars, take time and write down your biggest goals or aspirations. I'm not talking about getting milk at the store or picking up the mail, but your really big goals, like getting a new job, writing your first novel, or perhaps buying a house. Putting your 'big picture' goals down clearly is a great way to make sure that as you hustle and bustle in your daily life, you'll always be able to look up and keep an eye on the things you really want to accomplish.

Next, break your big goals into smaller steps. If you want to buy a house, lining up your finances or getting a loan pre-approval is a reasonable first step. Perhaps you could also start checking nearby real estate listings each day to get a better idea of price ranges and availability. Each small step gets you closer to your bigger goal, and builds confidence in the momentum you’re generating.

We’re not talking hours of investment here. Just 5-10 minutes at the beginning or end of each day reviewing your to-do list and calendar is all it takes to order your tasks and set up the upcoming day to accommodate them.

By clarifying your long-term, big-picture goals and then regularly breaking them down into short-term steps, you’ll find yourself more focused, less overwhelmed, and better poised for success.

3. Capture everything, always

The human brain is an amazing computing machine, but it’s also prone to overload. Inspiration can fade just as quickly as it arrives, so don’t trust your memory; capture ideas in your to-do list as soon as they strike.

If your bigger goals are clearly in perspective and you’ve started working towards them, expand your scope to capture smaller thoughts and ideas, too, like changing the water filter or finding a spare charging cable for your phone. Once you get in the habit of saving your thoughts—as formless or vague as they may be—you’ll relieve yourself of the responsibility of having to remember them.

The key, however, is doing something with those thoughts. During your daily reviews, add important dates to your calendar, key goals to your to-do list, and contacts to your address book. Prune your to-do list of frivolous ideas and prioritize the good stuff often with a keen eye towards your big goals—but don’t limit what you jot down. It’s far easier to delete a bad idea than to remember a good idea you’ve forgotten.

4. Time-boxing for the win

Now you've got a big list of important things to do. It can be overwhelming to consider doing everything on it, particularly if your daily calendar is usually packed. Time-boxing, or scheduling time for common or regular tasks, is an easy and effective way to plan your progress, even with a busy schedule.

First, group together common or related tasks like phone calls, errands, meetings, emails you need to send, shopping items, et cetera. Then find available spots on your calendar you can block out in order to work on them. It may feel odd to schedule yourself, but if you don’t make time for your own progress, no one else will do it for you.

As the weeks go by, you’ll know if you’re making progress on your goals or not, and can adjust in the coming weeks to accommodate for a bit more or less ‘personal’ time. The more you treat your schedule as a guided roadmap for making progress on your goals, the less stressed you’ll feel; you’ll always know what’s ahead, and be better able to focus on what’s important.

5. Hold yourself accountable

No matter how good your productivity plan is, you’re bound to miss things here and there, or let things slip a bit from week to week. Everyone’s human, after all.

But don’t use that as an excuse. If you find yourself procrastinating, ignoring items on your lists, or your projects start slipping behind, use reminders to keep yourself honest.

If you’re a paper-planner person, this could be as simple as scheduling your to-dos right onto your calendar to remind you when they need to be finished (or started). If you use mobile or desktop apps to manage your goals and calendar, it’s usually dead simple to set an alarm or reminder to trigger you. But be sure to have the right frame of mind with this: Your goal is to remind yourself of things you’ve already found important, not to boss yourself endlessly or add stress to your week.

6. Finish everything you start

The most successful people I’ve known are “closers” who finish whatever they start. How many times have you thrown dishes in the sink, planning to get back to them later? Or let that email you didn’t send bother you later in the evening when you want to relax and unwind? Unfinished tasks can weigh heavily on you.

Be a closer. Don't start anything you don't have the time, tenacity, or focus to finish, and you'll find yourself far less cluttered and distracted. Just move that task out a day, or a week, or to whenever you'll have more time. You'll probably do a much better job at it, too.

By adopting these six productive habits, I’ve found that my busiest days feel more productive and my largest goals feel within reach.

Want to know more about personal organization? Browse our Productivity courses at lynda.com to explore all these ideas and more.

Microservices are currently getting a lot of attention: articles, blogs, discussions on social media, and conference presentations. They are rapidly heading towards the peak of inflated expectations on the Gartner Hype cycle. At the same time, there are skeptics in the software community who dismiss microservices as nothing new. Naysayers claim that the idea is just a rebranding of SOA. However, despite both the hype and the skepticism, the Microservice architecture pattern has significant benefits – especially when it comes to enabling the agile development and delivery of complex enterprise applications.

This blog post is the first in a 7-part series about designing, building, and deploying microservices. You will learn about the approach and how it compares to the more traditional Monolithic architecture pattern. This series will describe the various elements of the Microservice architecture. You will learn about the benefits and drawbacks of the Microservice architecture pattern, whether it makes sense for your project, and how to apply it.


Let’s first look at why you should consider using microservices.

Building Monolithic Applications

Let’s imagine that you were starting to build a brand new taxi-hailing application intended to compete with Uber and Hailo. After some preliminary meetings and requirements gathering, you would create a new project either manually or by using a generator that comes with Rails, Spring Boot, Play, or Maven. This new application would have a modular hexagonal architecture, like in the following diagram:

[Diagram: Graph-01]

At the core of the application is the business logic, which is implemented by modules that define services, domain objects, and events. Surrounding the core are adapters that interface with the external world. Examples of adapters include database access components, messaging components that produce and consume messages, and web components that either expose APIs or implement a UI.

Despite having a logically modular architecture, the application is packaged and deployed as a monolith. The actual format depends on the application’s language and framework. For example, many Java applications are packaged as WAR files and deployed on application servers such as Tomcat or Jetty. Other Java applications are packaged as self-contained executable JARs. Similarly, Rails and Node.js applications are packaged as a directory hierarchy.

Applications written in this style are extremely common. They are simple to develop since our IDEs and other tools are focused on building a single application. These kinds of applications are also simple to test. You can implement end-to-end testing by simply launching the application and testing the UI with Selenium. Monolithic applications are also simple to deploy. You just have to copy the packaged application to a server. You can also scale the application by running multiple copies behind a load balancer. In the early stages of the project it works well.

Marching Towards Monolithic Hell

Unfortunately, this simple approach has a huge limitation. Successful applications have a habit of growing over time and eventually becoming huge. During each sprint, your development team implements a few more stories, which, of course, means adding many lines of code. After a few years, your small, simple application will have grown into a monstrous monolith. To give an extreme example, I recently spoke to a developer who was writing a tool to analyze the dependencies between the thousands of JARs in their multi-million line of code (LOC) application. I’m sure it took the concerted effort of a large number of developers over many years to create such a beast.

Once your application has become a large, complex monolith, your development organization is probably in a world of pain. Any attempts at agile development and delivery will flounder. One major problem is that the application is overwhelmingly complex. It's simply too large for any single developer to fully understand. As a result, fixing bugs and implementing new features correctly becomes difficult and time consuming. What's more, this tends to be a downwards spiral. If the codebase is difficult to understand, then changes won't be made correctly. You will end up with a monstrous, incomprehensible big ball of mud.

The sheer size of the application will also slow down development. The larger the application, the longer the start-up time is. For example, in a recent survey some developers reported start-up times as long as 12 minutes. I’ve also heard anecdotes of applications taking as long as 40 minutes to start up. If developers regularly have to restart the application server, then a large part of their day will be spent waiting around and their productivity will suffer.

Another problem with a large, complex monolithic application is that it is an obstacle to continuous deployment. Today, the state of the art for SaaS applications is to push changes into production many times a day. This is extremely difficult to do with a complex monolith since you must redeploy the entire application in order to update any one part of it. The lengthy start-up times that I mentioned earlier won’t help either. Also, since the impact of a change is usually not very well understood, it is likely that you have to do extensive manual testing. Consequently, continuous deployment is next to impossible to do.

Monolithic applications can also be difficult to scale when different modules have conflicting resource requirements. For example, one module might implement CPU-intensive image processing logic and would ideally be deployed in Amazon EC2 Compute Optimized instances. Another module might be an in-memory database and best suited for EC2 Memory-optimized instances. However, because these modules are deployed together you have to compromise on the choice of hardware.

Another problem with monolithic applications is reliability. Because all modules are running within the same process, a bug in any module, such as a memory leak, can potentially bring down the entire process. Moreover, since all instances of the application are identical, that bug will impact the availability of the entire application.

Last but not least, monolithic applications make it extremely difficult to adopt new frameworks and languages. For example, let’s imagine that you have 2 million lines of code written using the XYZ framework. It would be extremely expensive (in both time and cost) to rewrite the entire application to use the newer ABC framework, even if that framework was considerably better. As a result, there is a huge barrier to adopting new technologies. You are stuck with whatever technology choices you made at the start of the project.

To summarize: you have a successful business-critical application that has grown into a monstrous monolith that very few, if any, developers understand. It is written using obsolete, unproductive technology that makes hiring talented developers difficult. The application is difficult to scale and is unreliable. As a result, agile development and delivery of applications is impossible.

So what can you do about it?

Microservices – Tackling the Complexity

Many organizations, such as Amazon, eBay, and Netflix, have solved this problem by adopting what is now known as the Microservice architecture pattern. Instead of building a single monstrous, monolithic application, the idea is to split your application into a set of smaller, interconnected services.

A service typically implements a set of distinct features or functionality, such as order management, customer management, etc. Each microservice is a mini-application that has its own hexagonal architecture consisting of business logic along with various adapters. Some microservices would expose an API that’s consumed by other microservices or by the application’s clients. Other microservices might implement a web UI. At runtime, each instance is often a cloud VM or a Docker container.

For example, a possible decomposition of the system described earlier is shown in the following diagram:

[Diagram: Graph-03]

Each functional area of the application is now implemented by its own microservice. Moreover, the web application is split into a set of simpler web applications (such as one for passengers and one for drivers in our taxi-hailing example). This makes it easier to deploy distinct experiences for specific users, devices, or specialized use cases.

Each back-end service exposes a REST API and most services consume APIs provided by other services. For example, Driver Management uses the Notification server to tell an available driver about a potential trip. The UI services invoke the other services in order to render web pages. Services might also use asynchronous, message-based communication. Inter-service communication will be covered in more detail later in this series.
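As a sketch of the kind of synchronous REST call described above, here is roughly how Driver Management might invoke a hypothetical Notification service endpoint in Java; the URL and JSON payload are assumptions for illustration, not taken from the article:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hedged sketch: one microservice calling another's REST API.
// The endpoint and request body are hypothetical.
public class TripNotifier {
    private static final HttpClient client = HttpClient.newHttpClient();

    public static void notifyDriver(String driverId, String tripId) throws Exception {
        String json = String.format(
                "{\"driverId\":\"%s\",\"tripId\":\"%s\"}", driverId, tripId);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://notification-service/notifications"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Notification service responded: " + response.statusCode());
    }
}
```

In practice the service hostname would be resolved through the service discovery mechanism mentioned later in the series, rather than hard-coded.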

Some REST APIs are also exposed to the mobile apps used by the drivers and passengers. The apps don’t, however, have direct access to the back-end services. Instead, communication is mediated by an intermediary known as an API Gateway. The API Gateway is responsible for tasks such as load balancing, caching, access control, API metering, and monitoring, and can be implemented effectively using NGINX. Later articles in the series will cover the API Gateway.

[Diagram: Graph-05]

The Microservice architecture pattern corresponds to the Y-axis scaling of the Scale Cube, which is a 3D model of scalability from the excellent book The Art of Scalability. The other two scaling axes are X-axis scaling, which consists of running multiple identical copies of the application behind a load balancer, and Z-axis scaling (or data partitioning), where an attribute of the request (for example, the primary key of a row or identity of a customer) is used to route the request to a particular server.

Applications typically use the three types of scaling together. Y-axis scaling decomposes the application into microservices as shown above in the first figure in this section. At runtime, X-axis scaling runs multiple instances of each service behind a load balancer for throughput and availability. Some applications might also use Z-axis scaling to partition the services. The following diagram shows how the Trip Management service might be deployed with Docker running on Amazon EC2.

[Diagram: Graph-02]

At runtime, the Trip Management service consists of multiple service instances. Each service instance is a Docker container. In order to be highly available, the containers are running on multiple Cloud VMs. In front of the service instances is a load balancer such as NGINX that distributes requests across the instances. The load balancer might also handle other concerns such as caching, access control, API metering, and monitoring.

The Microservice architecture pattern significantly impacts the relationship between the application and the database. Rather than sharing a single database schema with other services, each service has its own database schema. On the one hand, this approach is at odds with the idea of an enterprise-wide data model. Also, it often results in duplication of some data. However, having a database schema per service is essential if you want to benefit from microservices, because it ensures loose coupling. The following diagram shows the database architecture for the example application.

[Diagram: Graph-04]

Each of the services has its own database. Moreover, a service can use a type of database that is best suited to its needs, the so-called polyglot persistence architecture. For example, Driver Management, which finds drivers close to a potential passenger, must use a database that supports efficient geo-queries.

On the surface, the Microservice architecture pattern is similar to SOA. With both approaches, the architecture consists of a set of services. However, one way to think about the Microservice architecture pattern is that it’s SOA without the commercialization and perceived baggage of web service specifications (WS-*) and an Enterprise Service Bus (ESB). Microservice-based applications favor simpler, lightweight protocols such as REST, rather than WS-*. They also very much avoid using ESBs and instead implement ESB-like functionality in the microservices themselves. The Microservice architecture pattern also rejects other parts of SOA, such as the concept of a canonical schema.

The Benefits of Microservices

The Microservice architecture pattern has a number of important benefits. First, it tackles the problem of complexity. It decomposes what would otherwise be a monstrous monolithic application into a set of services. While the total amount of functionality is unchanged, the application has been broken up into manageable chunks or services. Each service has a well-defined boundary in the form of an RPC- or message-driven API. The Microservice architecture pattern enforces a level of modularity that in practice is extremely difficult to achieve with a monolithic code base. Consequently, individual services are much faster to develop, and much easier to understand and maintain.

Second, this architecture enables each service to be developed independently by a team that is focused on that service. The developers are free to choose whatever technologies make sense, provided that the service honors the API contract. Of course, most organizations would want to avoid complete anarchy and limit technology options. However, this freedom means that developers are no longer obligated to use the possibly obsolete technologies that existed at the start of a new project. When writing a new service, they have the option of using current technology. Moreover, since services are relatively small it becomes feasible to rewrite an old service using current technology.

Third, the Microservice architecture pattern enables each microservice to be deployed independently. Developers never need to coordinate the deployment of changes that are local to their service. These kinds of changes can be deployed as soon as they have been tested. The UI team can, for example, perform A/B testing and rapidly iterate on UI changes. The Microservice architecture pattern makes continuous deployment possible.

Finally, the Microservice architecture pattern enables each service to be scaled independently. You can deploy just the number of instances of each service that satisfy its capacity and availability constraints. Moreover, you can use the hardware that best matches a service’s resource requirements. For example, you can deploy a CPU-intensive image processing service on EC2 Compute Optimized instances and deploy an in-memory database service on EC2 Memory-optimized instances.

The Drawbacks of Microservices

As Fred Brooks wrote almost 30 years ago, there are no silver bullets. Like every other technology, the Microservice architecture has drawbacks. One drawback is the name itself. The term microservice places excessive emphasis on service size. In fact, there are some developers who advocate for building extremely fine-grained 10-100 LOC services. While small services are preferable, it's important to remember that they are a means to an end and not the primary goal. The goal of microservices is to sufficiently decompose the application in order to facilitate agile application development and deployment.

Another major drawback of microservices is the complexity that arises from the fact that a microservices application is a distributed system. Developers need to choose and implement an inter-process communication mechanism based on either messaging or RPC. Moreover, they must also write code to handle partial failure since the destination of a request might be slow or unavailable. While none of this is rocket science, it’s much more complex than in a monolithic application where modules invoke one another via language-level method/procedure calls.

Another challenge with microservices is the partitioned database architecture. Business transactions that update multiple business entities are fairly common. These kinds of transactions are trivial to implement in a monolithic application because there is a single database. In a microservices-based application, however, you need to update multiple databases owned by different services. Using distributed transactions is usually not an option, and not only because of the CAP theorem. They simply are not supported by many of today’s highly scalable NoSQL databases and messaging brokers. You end up having to use an eventual consistency based approach, which is more challenging for developers.

Testing a microservices application is also much more complex. For example, with a modern framework such as Spring Boot it is trivial to write a test class that starts up a monolithic web application and tests its REST API. In contrast, a similar test class for a service would need to launch that service and any services that it depends upon (or at least configure stubs for those services). Once again, this is not rocket science but it’s important to not underestimate the complexity of doing this.
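For contrast, this is roughly what the "trivial" monolithic case looks like with Spring Boot's test support; the class name and endpoint are hypothetical, and the sketch assumes an existing Spring Boot application on the classpath:

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;

import static org.assertj.core.api.Assertions.assertThat;

// Spring Boot starts the whole (monolithic) application in-process on a
// random port, so its REST API can be exercised directly. A comparable
// microservice test would also need the service's dependencies running,
// or stubs configured for them.
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class TripApiTest {

    @Autowired
    private TestRestTemplate restTemplate;

    @Test
    void tripsEndpointResponds() {
        String body = restTemplate.getForObject("/trips", String.class);
        assertThat(body).isNotNull();
    }
}
```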

Another major challenge with the Microservice architecture pattern is implementing changes that span multiple services. For example, let’s imagine that you are implementing a story that requires changes to services A, B, and C, where A depends upon B and B depends upon C. In a monolithic application you could simply change the corresponding modules, integrate the changes, and deploy them in one go. In contrast, in a Microservice architecture pattern you need to carefully plan and coordinate the rollout of changes to each of the services. For example, you would need to update service C, followed by service B, and then finally service A. Fortunately, most changes typically impact only one service and multi-service changes that require coordination are relatively rare.

Deploying a microservices-based application is also much more complex. A monolithic application is simply deployed on a set of identical servers behind a traditional load balancer. Each application instance is configured with the locations (host and ports) of infrastructure services such as the database and a message broker. In contrast, a microservice application typically consists of a large number of services. For example, Hailo has 160 different services and Netflix has over 600 according to Adrian Cockcroft. Each service will have multiple runtime instances. That’s many more moving parts that need to be configured, deployed, scaled, and monitored. In addition, you will also need to implement a service discovery mechanism (discussed in a later post) that enables a service to discover the locations (hosts and ports) of any other services it needs to communicate with. Traditional trouble ticket-based and manual approaches to operations cannot scale to this level of complexity. Consequently, successfully deploying a microservices application requires greater control of deployment methods by developers, and a high level of automation.

One approach to automation is to use an off-the-shelf PaaS such as Cloud Foundry. A PaaS provides developers with an easy way to deploy and manage their microservices. It insulates them from concerns such as procuring and configuring IT resources. At the same time, the systems and network professionals who configure the PaaS can ensure compliance with best practices and with company policies. Another way to automate the deployment of microservices is to develop what is essentially your own PaaS. One typical starting point is to use a clustering solution, such as Mesos or Kubernetes in conjunction with a technology such as Docker. Later in this series we will look at how software-based application delivery approaches like NGINX, which easily handles caching, access control, API metering, and monitoring at the microservice level, can help solve this problem.

Summary

Building complex applications is inherently difficult. A Monolithic architecture only makes sense for simple, lightweight applications. You will end up in a world of pain if you use it for complex applications. The Microservice architecture pattern is the better choice for complex, evolving applications despite the drawbacks and implementation challenges.

In later blog posts, I’ll dive into the details of various aspects of the Microservice architecture pattern and discuss topics such as service discovery, service deployment options, and strategies for refactoring a monolithic application into services.

Stay tuned…


Guest blogger Chris Richardson is the founder of the original CloudFoundry.com, an early Java PaaS (Platform-as-a-Service) for Amazon EC2. He now consults with organizations to improve how they develop and deploy applications. He also blogs regularly about microservices at http://microservices.io.

 

Source: NGINX