06 Feb, 2018 - Continuing with Microservices pt. 2

Service Discovery

There are two Service Discovery patterns: Client-Side Discovery and Server-Side Discovery.  Both patterns work with a Service Registry.

For Client-Side Discovery, the client contacts the Service Registry and then picks (randomly, or via some other algorithm) one of the available service instances.  For Server-Side Discovery, the client contacts a single endpoint (a load balancer, for example), and that endpoint directs traffic to the available resources registered with the Service Registry.

The Service Registry needs an up-to-date list of healthy resources, which can be maintained through API endpoints for registration, deregistration, and health checks.
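
To make that concrete for myself, here's a minimal in-memory sketch of a registry plus client-side selection.  All the names here are mine, not from the article; a real registry (Eureka, Consul, etcd) would expose this over REST and evict instances that fail health checks:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical in-memory Service Registry, standing in for the REST
// endpoints described above (register / deregister / lookup).
class ServiceRegistry {
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();

    // POST /register
    void register(String service, String address) {
        instances.computeIfAbsent(service, k -> new CopyOnWriteArrayList<>()).add(address);
    }

    // DELETE /deregister (also triggered when a health check fails)
    void deregister(String service, String address) {
        List<String> list = instances.get(service);
        if (list != null) list.remove(address);
    }

    // GET /instances/{service}
    List<String> lookup(String service) {
        return instances.getOrDefault(service, List.of());
    }
}

// Client-Side Discovery: the client queries the registry itself and
// picks an instance (randomly here; round-robin etc. would also work).
class ClientSideDiscovery {
    static String pickInstance(ServiceRegistry registry, String service) {
        List<String> healthy = registry.lookup(service);
        if (healthy.isEmpty()) throw new IllegalStateException("no healthy instances for " + service);
        return healthy.get(ThreadLocalRandom.current().nextInt(healthy.size()));
    }
}
```

In the Server-Side pattern, that `pickInstance` logic would live behind the load balancer instead of in every client.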

Pros for Client-Side: no extra architectural components are needed besides the Service Registry.
Cons for Client-Side: every client has to implement the discovery and selection logic itself.

Pros for Server-Side: clients only need to know one endpoint, and discovery is abstracted away from them.
Cons for Server-Side: the load balancer becomes one more element in your architecture.


Event Driven Data Management

Microservices should each have their own backend data store.  Sharing a database would couple different Microservices together; data should only be accessed through each service's API.  This also lets each Microservice choose the best data store for its needs, whether that's a relational DB, a NoSQL DB, or some other source.

Since each service owns its own data, atomic transactions that span services aren't practical.  Enter Event-Driven Data Management.  When an event occurs in one service, it publishes a message to a message broker, which the other Microservices can subscribe to.  This can trigger a chain of tasks and workflows.  These are not ACID transactions, however.

Messaging should be designed to guarantee at-least-once (or exactly-once) processing.  Making sure that the processors are idempotent protects against repeated messages.
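
A rough sketch of what an idempotent consumer could look like; the event shape and the in-memory set of processed IDs are my own stand-ins (a real system would persist the IDs durably):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical event: the eventId is what makes deduplication possible.
record OrderCreatedEvent(String eventId, String orderId, double total) {}

class IdempotentOrderHandler {
    // In production this set would live in a durable store, not memory.
    private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

    void handle(OrderCreatedEvent event) {
        // At-least-once delivery means the broker may redeliver this event;
        // skipping already-seen IDs makes reprocessing a no-op.
        if (!processedIds.add(event.eventId())) {
            return; // duplicate delivery, already handled
        }
        createInvoice(event.orderId(), event.total());
    }

    private void createInvoice(String orderId, double total) {
        System.out.printf("Invoicing order %s for %.2f%n", orderId, total);
    }
}
```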

The downside to Event-Driven Data Management is that it is more complicated to develop with than ACID transactions.  You have to worry about data consistency, particularly for operations that query or update data across multiple services.

The end of the article describes an interesting paradigm shift for achieving atomicity, known as Event Sourcing.  Instead of storing an object with its state, you store the events that happened to that object (creation, updates, etc.), and other services subscribe to those events.
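
Here's a toy version of that idea as I understand it; the account/event names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Event Sourcing sketch: instead of persisting the current balance,
// persist the events and fold over them to recover the state.
sealed interface AccountEvent permits Deposited, Withdrawn {}
record Deposited(double amount) implements AccountEvent {}
record Withdrawn(double amount) implements AccountEvent {}

class Account {
    private final List<AccountEvent> history = new ArrayList<>();

    void apply(AccountEvent event) {
        history.add(event); // this list is what gets stored (and published to subscribers)
    }

    // Current state is derived from the history, not stored directly.
    double balance() {
        double balance = 0;
        for (AccountEvent e : history) {
            if (e instanceof Deposited d) balance += d.amount();
            else if (e instanceof Withdrawn w) balance -= w.amount();
        }
        return balance;
    }
}
```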


Another couple of good articles.  It's one thing to read about, and another to implement, though.  It seems like AWS helps a lot with Server-Side Discovery through their ELBs.

The Event-Driven Data Management article was a great read with a lot of thought-provoking ideas.  It had a nice little refresher on database ACID principles as well.

12 Dec, 2017 - Continuing with Microservices pt 1.

Continuing on with the NGINX Microservices articles

API Gateway

An API gateway is a single entry point to the system.  It prevents clients and other services from having to call each Microservice directly.  The downside to direct communication shows up when you want to refactor a service: it couples your clients and services directly to each other.

An API Gateway is similar to a Facade, where access to multiple interfaces is combined into one layer.  The gateway can provide one endpoint that aggregates the results of multiple endpoints.
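
A little sketch of that aggregation; the two backend services and their URLs are invented for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

// Gateway-as-Facade sketch: one endpoint fans out to two (hypothetical)
// backend services and aggregates the results for the client.
class ProductPageGateway {
    private final HttpClient client = HttpClient.newHttpClient();

    String productPage(String productId) throws Exception {
        CompletableFuture<String> details = fetch("http://catalog-service/products/" + productId);
        CompletableFuture<String> reviews = fetch("http://review-service/reviews?product=" + productId);
        // Combine both backend responses into a single payload for the client.
        return "{ \"details\": " + details.get() + ", \"reviews\": " + reviews.get() + " }";
    }

    private CompletableFuture<String> fetch(String url) {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                     .thenApply(HttpResponse::body);
    }
}
```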

Multiple API Gateways can be used for specific client types.

The main downside to a gateway is that it's another element in your architecture that needs to be maintained, developed, and deployed.

An API Gateway can also handle cross-cutting requirements such as caching and authentication, and it should deal with errors from backend services appropriately.

Inter-Process Communication

IPC mechanisms can be categorized along different dimensions, such as one-to-one vs. one-to-many, and synchronous vs. asynchronous.

Each service will have an API (not necessarily HTTP) through which other services communicate with it.  API changes must be carefully thought out, as it's unrealistic to update every client using that API.  Some changes only affect the underlying implementation, which is fine for communication, but changes that affect the interface can break older clients.  Consider supporting multiple versions of your API until you can deprecate the older, unused versions.
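
A tiny sketch of running two API versions side by side, using the JDK's built-in HttpServer; the routes and payloads here are made up:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Sketch: keep /v1 alive for old clients while /v2 changes the interface.
public class VersionedApi {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/v1/orders", ex -> respond(ex, "{\"id\": 1, \"total\": 9.99}"));
        // v2 renames fields; old clients keep using /v1 until it's retired.
        server.createContext("/v2/orders", ex -> respond(ex, "{\"orderId\": 1, \"grandTotal\": 9.99}"));
        server.start();
    }

    private static void respond(com.sun.net.httpserver.HttpExchange ex, String body) throws java.io.IOException {
        byte[] bytes = body.getBytes();
        ex.sendResponseHeaders(200, bytes.length);
        try (OutputStream os = ex.getResponseBody()) { os.write(bytes); }
    }
}
```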

Always use network timeouts and call limits for IPC.  Fail gracefully.
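
For example, something like this (the recommendation service is hypothetical; the point is the bounded wait and the graceful fallback):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

// Bound how long you'll wait on another service, and have a fallback
// ready instead of letting the failure cascade through the system.
class RecommendationClient {
    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    String recommendationsFor(String userId) {
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://recommendation-service/users/" + userId)) // hypothetical service
                .timeout(Duration.ofSeconds(2))
                .build();
        try {
            return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        } catch (HttpTimeoutException e) {
            return "[]"; // fail gracefully: degrade to an empty list
        } catch (Exception e) {
            return "[]"; // connection refused, interrupted, etc.
        }
    }
}
```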

Using messaging rather than HTTP / REST can provide some benefits.  Pub/sub models and queues decouple clients from servers, and they provide a place to store messages if subscribers are overwhelmed or taken offline.
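
A toy illustration of that buffering, with a plain BlockingQueue standing in for a real broker:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy channel: the publisher never talks to the subscriber directly,
// and messages sit in the queue while the subscriber is slow or offline.
class MessageChannel {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    void publish(String message) throws InterruptedException {
        queue.put(message); // publisher returns immediately, no coupling
    }

    String consumeNext() throws InterruptedException {
        return queue.take(); // subscriber drains at its own pace
    }
}
```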


Lots of great ideas in these articles.  I actually really liked the breakdown of RESTful APIs, and the different levels of maturity according to Leonard Richardson.

Four more articles to get through, then on to a tutorial from AWS.

11 Dec, 2017 - Introduction to Microservices

Went to AWS Re:Invent last week in Las Vegas.  A lot of great material that I was able to soak up.  Don’t use AWS too much in my job, but there were a lot of topics covered there that I think are very relevant to my career path.

First of those being Microservices.  Most (if not all) of the applications & websites I've built or worked on have been monolith applications.  Microservices seem like an interesting way to split up and decouple your services.  Not having a lot of experience with SOA or ESBs, this was sort of a jump in my education.

I started off with an article series on nginx.com called “Introduction to Microservices”.  It had a lot of great information that I want to summarize here:

  • Monoliths can have greater start-up and deployment times as opposed to smaller services
  • Microservices allow you to deploy only the services that changed
  • Microservices allow you to separate services and pick the best resources for each deployment.  E.g., an image processor on a GPU-optimized instance, the DB on a memory-optimized instance
  • Monoliths can suffer from a single point of failure.  If a bug brings down the API, all of your backend is now unavailable.
  • Adopting new technologies is harder in a monolith.  It's easier to port a smaller service than a giant application
  • Microservices can communicate through technology-agnostic interfaces, e.g., RESTful APIs.
  • Use a separate DB for each service to ensure loose coupling.  This can result in duplication of data
  • Monoliths have the advantage of being one app, one deploy.  Direct method calls eliminate the need for managed messaging between services.
  • Changes in multiple services with dependencies require coordinated builds, which can be tricky.

Those are just the notes from the first article.  I have a lot of questions on Microservices before reading the remaining six articles in the series.  It seems necessary for applications that reach a certain size, but what if your team works on multiple unrelated applications, each of which can be maintained entirely by one person?  Is it worth the overhead to move to Microservices?  Loved the first article and the comments at the bottom though.  Lots to think about for a beginner.

17 May, 2017 - Application Performance Management / Monitoring (APM)

Reading about APM on Stackify today:

  • https://stackify.com/what-is-apm/
  • https://stackify.com/mistakes-implementing-application-performance-monitoring-solutions/

We don't do much / any of this currently.  Thought it would be good to at least look into it and see what options we have.

You can parse access logs to determine how long requests take, but APM can go deeper and analyze what is causing performance spikes.  It's similar to profiling.
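
As a quick sketch of the access-log idea (the log format here is an assumption; real formats like nginx's $request_time vary):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Assumes each log line ends with the request time in ms,
// e.g. "GET /api/orders 200 143" -- adjust parsing for your real format.
public class AccessLogTimings {
    public static void main(String[] args) throws IOException {
        List<Long> timings = Files.lines(Path.of("access.log"))
                .map(line -> line.split(" "))
                .map(fields -> Long.parseLong(fields[fields.length - 1]))
                .sorted()
                .toList();
        // Naive p95: APM tools compute this continuously and per-endpoint.
        long p95 = timings.get((int) (timings.size() * 0.95));
        System.out.println("p95 request time: " + p95 + " ms");
    }
}
```

APM goes beyond this by tracing inside the request, down to the method or line that caused the spike.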

Notable features

  • Profiling of entire app / system from multiple levels (high level – entire web request / low level – to the method / line / language)
  • Detailed traces of web requests allow you to examine production issues after they’ve happened, instead of trying to recreate after the fact
  • Resource metrics (CPU, disk space, memory)
  • Logging management
  • RUM (Real user monitoring), front end metrics, page loads, etc.

Stackify starts at $10 / month (for what though?)

The extra information comes with a resource trade-off: is the profiling itself using up resources / time?

17 May, 2017 - .NET Core / .NET Framework

.NET Core is Microsoft's newest lightweight, cross-platform version of their .NET Framework.

“.NET Core is a blazing fast, lightweight and modular platform for creating web applications and services that run on Windows, Linux and Mac.”

Version 2.0 was just released.  I've only developed against the .NET Framework (latest version is v4.7) using ASP.NET MVC and WebAPI2.  Core merges those two technologies into one space.

It will take some time for me to learn the differences between the two, and whether to use one over the other.

.NET Core is gaining a lot of attention because of its open-source / cross-platform standardization, but from what I've read it's not as mature and doesn't support all the .NET Framework libraries that are currently available.

This is the base of everything though if I want to be a modern .NET architect.

13 Aug, 2014 - The Pseudocode Programming Process

Chapter 9 in Code Complete 2 by Steve McConnell details the Pseudocode Programming Process (PPP).

The idea is that you write a routine in plain English (Pseudocode) to abstract away from the actual code.  You then refine the pseudocode into your language of choice.
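
For example, something like this (my own toy routine, not one from the book): the pseudocode is written first, then the code is refined beneath it, with the pseudocode surviving as comments:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CleanLines {
    // Pseudocode written first, kept while the routine is refined:
    //   read each line of the file
    //   trim whitespace from each line
    //   keep only the non-empty lines
    //   return the lines
    static List<String> readNonEmptyLines(Path file) throws IOException {
        return Files.readAllLines(file).stream()   // read each line of the file
                .map(String::trim)                 // trim whitespace from each line
                .filter(line -> !line.isEmpty())   // keep only the non-empty lines
                .toList();                         // return the lines
    }
}
```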

One of the supposed benefits is that the original pseudocode can become your comments.  This to me seemed like a great benefit, until I thought about it a bit more.  Some / most code statements don’t need a comment if the statement is clear enough to convey what’s going on.  That’s good code.  Of course, some statements need clarification with comments, which is why commenting is a good idea.  Leaving the pseudocode in as comments clutters up your code.  Although, near the end of the chapter he does advocate for “Remov[ing] redundant comments”.

Also, as the routine evolves, you're no longer really working in pseudocode, unless you keep another version of it separate from your code, meaning that when you change anything, you also have to change the pseudocode.  I think the DRY (Don't Repeat Yourself) principle applies here.

Jeff Atwood said it way better than I could.

That being said, I enjoyed the chapter, and definitely see the perks of trying PPP out sometime in the future.  This blog post from David Zych is a nice write-up with an example.

The chapter lists valid supplements / alternatives to PPP, such as Test-First / Test-Driven Development and Design by Contract.

Side Note: The last phase in PPP is checking the code, which includes the step "compile the code".  That's one way the book shows its age, as quite a few modern IDEs will compile as you type / save.

12 Aug, 2014 - More Studying

Studying, Studying, Studying.

A new work project may have me working with a PHP CMS that I haven't worked with much before.  Once it ramps up, a lot of my time will be spent looking into that.

Currently, I'm finding that the more subjects I study, the more they lead to other areas that I'd like to put some time into.  They all branch out, and I find myself overwhelmed with a list of topics I want to study, without actually getting into any of them.  I have a large list on Evernote tracking all the topics I come across, and it's getting larger rather than smaller.

I've committed myself to concentrating on no more than two books at once.  Right now, I'm going through Code Complete 2 by Steve McConnell, and Code: The Hidden Language of Computer Hardware and Software by Charles Petzold.  The former is a tome of knowledge.  Published in 2004, it concentrates on the "construction" of code, with chapter topics such as "Defensive Programming", "Fundamental Data Types", and "Refactoring".  It's going to take me quite a while to get through it all, but the few chapters I've read already have been great.

The latter book is a very thorough breakdown of how the modern computer works, and how it came to be that way.  It can get very technical, describing the various types of gates used in processors and how they are combined to make more complex elements.  Despite its depth, it's very easy to read and get into.  I find myself wanting to read it whenever I have some free time.

Hopefully, I’ll progress with those two books in good time, so I can move on to more great resources, but for now, I’m focusing my attention on those two.

05 Aug, 2014 - Studying

I've stalled on playing with Perl for a while.  I finished reading Learning Perl, 6th Edition, and it was a good read.  I did all the sample problems, and got through most of them without issue.  I used Perl in a few work projects, and made something neat for a friend with it.  I plan on making a more detailed post about it later, but basically, it converts your iPhone chat history into navigable webpages.  A friend of mine and I had over 14,000 texts between the two of us, and searching past conversations was a huge hassle, so I made a little Perl tool to allow for an easier search experience.

The rest of my time I’ve spent studying up on previous topics I haven’t touched seriously in a while.  Searching, sorting, bit manipulation, data structures.  A lot of things that I studied in computer science, but don’t necessarily get to use every day.

11 Mar, 2014 - More TPP and Perl

Finished reading the main sections of The Pragmatic Programmer.  The fifteen-year-old book shows its age in some spots, but mostly it's great advice.  I was pleased to see that a lot of the practices the book encourages are already being implemented at the place I work and at other places I've interned.  Specifically: using source control, commenting your code in a sensible manner (document why, not how; the code tells you how), and automated builds and test suites.

It's a great reference book to have with you whenever you're starting a new project.  The thing I'm mostly going to take away from it is to think about what I'm trying to solve.  I see a lot of instances in my job where more automation would help.  Developer environment setup, data conversion, and some other areas could all use a bit of automation.  I do recognize that some things would be harder to automate than to just do manually.  Some deployment procedures would be difficult to automate, as required permissions are missing.  Sometimes you have to work with what you've got, but that doesn't mean you shouldn't look for improvements.

I've been slowly going through this Perl book: Learning Perl, 6th Edition.  I'm about four chapters in, and it's been a good resource so far.  Each chapter has exercises at the end to work through, which I've enjoyed.  Some of them require user input from the console, and I've found that Notepad++ is not the best environment to run the programs from, as there doesn't seem to be a way to end input without killing the program completely.  Specifically, reading a list of values from the <STDIN> filehandle: normally you would send EOF (Ctrl-D or Ctrl-Z, depending on your OS of choice), but I was unable to find a way to do that within Notepad++.  I've switched to Sublime Text as my editor.  I haven't found out if it supports sending EOF to <STDIN>, but I like the look of it.  I think I'm going to have to learn to run the programs from the command prompt.

13 Feb, 2014 - Beginning with Perl – Perl for XML transformation

I've been reading through TPP, and the second chapter deals with the tools that a good programmer should have access to.  I never really thought about it, but text manipulation is a big part of the job.  In the past I have written small projects in Java to create test data or transform one data set into another, but the development time and overhead were quite high.  The book mentions Perl, Python, and Ruby as better text manipulators, and I've decided to try Perl.  I've done a bit in Python before, but nothing much more than a Hello World program.

Reading the book has exposed me to some glaring holes in my knowledge, but also inspired me to look for better solutions to problems.  There are some manual processes I've been doing at work that I don't think I would have considered automating until I started reading tech books.  It's ignited my passion for writing code.

So, I downloaded Strawberry Perl for Windows, set up Notepad++ for executing Perl scripts, and wrote my first Hello World program.  I'm going to try some XML transformation as my first real program.  I don't know if it's the best tool for the job (XSLT, Python, etc. could all be better choices), but I guess I'll find out.