Visual Studio 2017 Productivity Guide


http://aka.ms/vs2017guide

A guide from Microsoft for keyboard shortcuts and other productivity tools in Visual Studio 2017.

Ctrl-F12 & Shift-F12

I use F12 a lot for “Go to Definition”, but I always forget these shortcuts for “Go to Implementation” and “Find All References”, two context menu items I click on regularly.

Ctrl-T – Go to All

I was used to Eclipse’s shortcuts for navigating to a type or file (Ctrl-T / Ctrl-R, if I remember correctly), but I never found the equivalent functionality in VS to work as well.  I’ll have to remember to give this one a try.

Refactoring

There are some useful refactorings that I always seem to do manually.  VS 2017 has utilities for various refactorings, including null checks, Extract Method, Generate Method / Constructor, and some others.
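
For example, the null-check refactoring generates the kind of guard clause I’d otherwise type by hand.  A rough sketch of the result (my own example, not from the guide):

    using System;

    public class OrderProcessor
    {
        // After running the "Add null check" refactoring on the parameter,
        // the guard clause below is generated instead of typed by hand.
        public void Process(string orderId)
        {
            if (orderId == null)
            {
                throw new ArgumentNullException(nameof(orderId));
            }

            Console.WriteLine($"Processing order {orderId}");
        }
    }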

Live Unit Testing

Once I get myself and my team more testing-focused, we should try out VS 2017’s Live Unit Testing feature.


A few neat items in the guide could help my productivity.  I think Live Unit Testing and the refactoring helpers would benefit me the most at this time.

Continuing with Microservices pt. 3

The last two articles from that series on Microservices:

Choosing a Deployment Strategy

There are multiple patterns for deployment.  Monolithic applications are usually deployed in their entirety to multiple hosts, but that does not have to be the case for Microservices.

The Multiple Services per Host pattern is exactly what it sounds like: you deploy instances of multiple services to the same host / VM, whether they run in the same container / web server or as separate processes.  One benefit is that the services share the resources of the underlying host, and deployment can be fast and easy.  A drawback is that you couple services together through those shared resources; one service could hog them all, starving the other services.

An alternative pattern is One Service per Host: each deployed service instance has its own host / server and set of resources.  There are two flavours of this, Service Instance per VM and Service Instance per Container.  (Deploying to serverless-style cloud offerings such as Azure Web Apps, AWS Elastic Beanstalk, or FaaS gives us another flavour.)

Service Instance per VM uses VM images to package the service, and the image is then instantiated as many times as you need.  Packaging your services into VMs gives you a reusable template that you can deploy over and over, which can also take advantage of other modern cloud benefits such as load balancing and auto scaling, and deployment becomes reliable.  The downside is the overhead of an entire VM per service instance, including the OS.

That’s where the per-container pattern can help, as containers package a smaller footprint that can be deployed with fewer resources.  The main downside to containers is the relative newness of the technology: it’s rapidly maturing, but containers have only been around for a couple of years.  If you don’t use a managed container service, you are also responsible for the container infrastructure and the underlying OS / VM.

Refactoring a Monolith Application

When refactoring a monolith application into Microservices, it’s best to break off pieces of the application into services gradually, rather than rewriting the entire application at once.  You can shrink the functionality in the monolith by running Microservices alongside it, shifting more and more features to services until the monolith disappears, or becomes just one of many Microservices.

When you have the monolith and Microservices running side by side, you will need both a router (to route incoming traffic to the appropriate code) and some glue code to get the two working together.
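
As a rough sketch of what that glue code could look like (hypothetical names, and assuming the extracted service exposes an HTTP API), the monolith can hide the “old code vs. new service” decision behind an interface:

    using System.Net.Http;
    using System.Threading.Tasks;

    // Glue code: callers inside the monolith depend on this interface, so they
    // don't care whether invoicing is still in-process or has been extracted.
    public interface IInvoiceService
    {
        Task<string> GetInvoiceAsync(int orderId);
    }

    // Legacy path: calls the existing monolith code directly.
    public class InProcessInvoiceService : IInvoiceService
    {
        public Task<string> GetInvoiceAsync(int orderId) =>
            Task.FromResult($"invoice-for-{orderId} (generated in the monolith)");
    }

    // New path: proxies the call to the extracted Microservice over HTTP.
    public class RemoteInvoiceService : IInvoiceService
    {
        private static readonly HttpClient Http = new HttpClient();

        public Task<string> GetInvoiceAsync(int orderId) =>
            Http.GetStringAsync($"http://invoice-service/invoices/{orderId}");
    }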

An initial split of your Monolith can naturally happen between the different layers, specifically the presentation and the business & data access layers.

When you begin to break things up further, start small, and start with the low-hanging fruit to get more familiar with Microservices and their quirks.  Modules and functionality that change frequently are also good candidates for breaking off early, as that frees you from having to build and deploy the whole app for those frequent changes.


That’s the end of the Microservices article series.  I feel like I got a good glimpse into the elements that a Microservices architecture encompasses.  I wish that I had a code base to work with, or a tutor to help guide me through a real-world example, but I feel more confident having discussions about this architecture style.  I think a lot of the projects I have at work right now are a bit too small to consider converting (although we do have semblances of Microservices in some sites).  Glad I read through them all.

Continuing with Microservices pt. 2

Service Discovery

Two types of Service Discovery patterns: Client-side Discovery & Server-side Discovery.  Both patterns work with a Service Registry.

For Client-Side Discovery, the client contacts the Service Registry and then picks (randomly, or via some other algorithm) one of the available service instances.  For Server-Side Discovery, the client contacts a single endpoint (a load balancer, for example), and that endpoint directs traffic to the available instances registered with the Service Registry.

The Service Registry needs an up-to-date list of healthy instances, which can be maintained via API endpoints for registration, deregistration, and health checks.
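
A minimal sketch of the client-side flavour (hypothetical registry lookup and service names, not from the article): ask the registry for the healthy instances, pick one at random, and call it.

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class ClientSideDiscovery
    {
        private static readonly HttpClient Http = new HttpClient();
        private static readonly Random Rng = new Random();

        // Stand-in for the Service Registry lookup; a real implementation would
        // query something like Consul or Eureka for the healthy instances.
        private Task<List<string>> GetInstancesAsync(string serviceName) =>
            Task.FromResult(new List<string>
            {
                "http://10.0.0.11:5000",
                "http://10.0.0.12:5000"
            });

        public async Task<string> CallAsync(string serviceName, string path)
        {
            var instances = await GetInstancesAsync(serviceName);

            // Client-side selection: pick one of the available instances at random.
            var baseUrl = instances[Rng.Next(instances.Count)];
            return await Http.GetStringAsync(baseUrl + path);
        }
    }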

Pros for Client-Side: no additional architectural components are needed (besides the Service Registry).
Cons for Client-Side: each client has to implement the discovery and selection logic.

Pros for Server-Side: clients only need to know one endpoint; discovery is abstracted away from them.
Cons for Server-Side: the load balancer becomes one more element in your architecture.


Event Driven Data Management

Microservices should each have their own backend data store.  Sharing a database would couple different Microservices together; data should only be accessed via each service’s API.  This also allows each Microservice to choose the best data store for its needs, whether that’s a relational DB, a NoSQL DB, or some other source.

Since each service owns its own data, it’s not practical to have atomic transactions that span services.  Enter Event-Driven Data Management.  When something notable happens in one service, it publishes a message to a message broker, which the other Microservices can subscribe to.  This can trigger a chain of tasks and drive workflows.  These are not ACID transactions, however.

Messages should be designed to ensure at-least-once (or exactly-once) processing.  Making the processors idempotent helps cope with repeated messages.
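
A rough sketch of the idempotency side (hypothetical message shape and store, my own example): give each message an ID and skip any you’ve already processed, so at-least-once delivery doesn’t cause duplicate work.

    using System;
    using System.Collections.Generic;

    public class OrderCreatedEvent
    {
        public Guid MessageId { get; set; }
        public int OrderId { get; set; }
    }

    public class OrderCreatedHandler
    {
        // In a real service this would be a durable store, not an in-memory set.
        private readonly HashSet<Guid> _processed = new HashSet<Guid>();

        public void Handle(OrderCreatedEvent evt)
        {
            // The broker may redeliver the same message, so ignore duplicates.
            if (!_processed.Add(evt.MessageId))
            {
                return;
            }

            Console.WriteLine($"Reserving stock for order {evt.OrderId}");
        }
    }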

The downside to event-driven data management is that it is more complicated to develop with than ACID transactions.  You have to worry about data consistency, and operations that query or update data across multiple services become harder to reason about.

The end of the article describes an interesting paradigm shift for achieving atomicity, called Event Sourcing.  Instead of storing an object with its current state, you store the events that happened to that object (creation, updates, etc.), and the other services subscribe to those events.
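
A tiny sketch of the Event Sourcing idea (my own illustration, not from the article): instead of persisting the current balance, persist the events and derive the state by replaying them.

    using System.Collections.Generic;
    using System.Linq;

    public abstract class AccountEvent
    {
        public decimal Amount { get; set; }
    }

    public class Deposited : AccountEvent { }
    public class Withdrawn : AccountEvent { }

    public class Account
    {
        // The event stream is the source of truth; the current state is derived.
        private readonly List<AccountEvent> _events = new List<AccountEvent>();

        public void Apply(AccountEvent evt) => _events.Add(evt);

        // Replay the events to compute the current balance on demand.
        public decimal Balance =>
            _events.Sum(e => e is Deposited ? e.Amount : -e.Amount);
    }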


Another couple of good articles.  It’s one thing to read about, and another to implement, though.  It seems like AWS helps a lot with the server-side approach through its ELBs.

The Event-Driven Data Management article was a great read with a lot of thought-provoking ideas.  It had a nice little refresher on database ACID principles as well.

Continuing with Microservices pt. 1

Continuing on with the NGINX Microservices articles.

API Gateway

An API gateway is a single entry point to the system.  It prevents clients and other services from having to call each Microservice directly.  The downside to direct communication shows up when you want to refactor a service: it couples your clients and services directly to each other.

An API Gateway is similar to a Facade, where access to multiple interfaces is combined into one layer.  The gateway can provide a single endpoint that aggregates the results of multiple backend endpoints.
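
A rough sketch of that aggregation (hypothetical backend service URLs, my own example): one gateway call fans out to two services and returns a combined payload.

    using System.Net.Http;
    using System.Threading.Tasks;

    // Gateway-style aggregation: one request from the client, several requests
    // to backend Microservices behind the scenes.
    public class ProductDetailsGateway
    {
        private static readonly HttpClient Http = new HttpClient();

        public async Task<string> GetProductPageAsync(int productId)
        {
            var infoTask = Http.GetStringAsync($"http://catalog-service/products/{productId}");
            var reviewsTask = Http.GetStringAsync($"http://review-service/products/{productId}/reviews");

            await Task.WhenAll(infoTask, reviewsTask);

            // Combine the two backend responses into a single response for the client.
            return "{ \"product\": " + infoTask.Result + ", \"reviews\": " + reviewsTask.Result + " }";
        }
    }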

Multiple API Gateways can be used for specific client types.

Some downsides to a gateway are that it’s another element in your architecture that needs to be maintained, developed, and deployed.

An API Gateway can also handle other requirements such as caching or authentication, and it should deal with backend errors appropriately.

Inter-Process Communication

IPC mechanisms can be categorized along different dimensions, such as one-to-one vs. one-to-many, and synchronous vs. asynchronous.

Each service will have an API (not necessarily HTTP) that other services can communicate with it through.  API changes must be carefully thought out, as it’s unrealistic to update every client using that API.  Some changes only affect the underlying implementation, which should be fine for communication, but changes that affect the interface can break older clients.  Consider supporting multiple versions of your API until you can deprecate the older, unused versions.

Always use network timeouts and call limits for IPC.  Fail gracefully.
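
A minimal sketch of what that can look like with HttpClient (hypothetical service URL, my own example): bound the wait with a timeout and return a fallback instead of letting the failure cascade.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class RecommendationClient
    {
        // Always bound how long you're willing to wait on another service.
        private static readonly HttpClient Http = new HttpClient
        {
            Timeout = TimeSpan.FromSeconds(2)
        };

        public async Task<string> GetRecommendationsAsync(int userId)
        {
            try
            {
                return await Http.GetStringAsync($"http://recommendation-service/users/{userId}");
            }
            catch (Exception ex) when (ex is TaskCanceledException || ex is HttpRequestException)
            {
                // Fail gracefully: fall back to an empty result rather than
                // propagating the failure to our own callers.
                return "[]";
            }
        }
    }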

Using messaging rather than HTTP / REST can provide some benefits.  Pub/sub models and queues can decouple clients from servers, and they provide a place to buffer messages if subscribers are overwhelmed or taken offline.


Lots of great ideas in these articles.  I actually really liked the breakdown of RESTful APIs and the different levels of maturity according to Leonard Richardson.

Four more articles to get through, then on to a tutorial from AWS.

Introduction to Microservices

Went to AWS re:Invent last week in Las Vegas.  A lot of great material that I was able to soak up.  I don’t use AWS much in my job, but there were a lot of topics covered there that I think are very relevant to my career path.

First of those being Microservices.  Most (if not all) of the applications & websites I’ve built or worked on have been monolithic applications.  Microservices seems like an interesting architecture pattern for splitting up and decoupling your services.  Not having a lot of experience with SOA or ESBs, this was a bit of a jump in my education.

I started off with an article series on nginx.com called “Introduction to Microservices”.  It had a lot of great information that I want to summarize here:

  • Monoliths can have greater start-up and deployment times compared to smaller services
  • Microservices allow you to deploy only the services that changed
  • Microservices let you pick the best resources for each deployment, e.g., an image processor on a GPU-optimized instance, the DB on a memory-optimized instance
  • Monoliths can suffer from a single point of failure: if a bug brings down the API, your entire backend is unavailable
  • Adopting new technologies is harder in a monolith; it’s easier to port a smaller service than a giant application
  • Microservices can communicate through technology-agnostic interfaces, e.g., RESTful APIs
  • Use a separate DB for each service to ensure loose coupling, which can result in some duplication of data
  • Monoliths have the advantage of being one app, one deploy, and direct method calls eliminate the need for managed messaging between services
  • Changes that span multiple dependent services require coordinated builds, which can be tricky

Those are just the notes from the first article.  I have a lot of questions about Microservices before reading the remaining six articles in the series.  The pattern seems necessary for applications that reach a certain size, but what if your team works on multiple unrelated applications, each small enough to be maintained by one person?  Is it worth the overhead to move to Microservices?  Loved the first article and the comments at the bottom, though.  Lots to think about for a beginner.

Application Performance Management / Monitoring (APM)

Reading about APM on Stackify today:

  • https://stackify.com/what-is-apm/
  • https://stackify.com/mistakes-implementing-application-performance-monitoring-solutions/

We don’t do much (or any) of this currently.  Thought it would be good to at least look into it and see what options we have.

You can parse access logs to determine how long requests take, but APM can go deeper and analyze what is causing performance spikes, similar to profiling.
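
As a quick illustration of the access-log approach (my own sketch, assuming each log line ends with the request duration in milliseconds; the actual format depends on your web server configuration):

    using System;
    using System.IO;
    using System.Linq;

    class SlowRequestReport
    {
        static void Main()
        {
            // Flag any request that took longer than one second.
            var slowRequests = File.ReadLines("access.log")
                .Where(line =>
                {
                    var fields = line.Split(' ');
                    return double.TryParse(fields.Last(), out var ms) && ms > 1000;
                })
                .Take(20);

            foreach (var line in slowRequests)
            {
                Console.WriteLine(line);
            }
        }
    }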

Notable features

  • Profiling of the entire app / system at multiple levels (high level – an entire web request; low level – down to the method, line, or language)
  • Detailed traces of web requests allow you to examine production issues after they’ve happened, instead of trying to recreate after the fact
  • Resource metrics (CPU, disk space, memory)
  • Logging management
  • RUM (Real user monitoring), front end metrics, page loads, etc.

Stackify starts at $10 / month (for what, though?)

The extra information comes with a resource trade-off: is the profiling itself using up resources / time?

.NET Core / .NET Framework

.NET Core is Microsoft’s newest lightweight, cross-platform version of the .NET Framework.

“.NET Core is a blazing fast, lightweight and modular platform for creating web applications and services that run on Windows, Linux and Mac.”

Version 2.0 was just released.  I’ve only developed against the .NET Framework (latest version is v4.7) using ASP.NET MVC and Web API 2.  ASP.NET Core merges those two technologies into one framework.
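
A small sketch of what that merge looks like in an ASP.NET Core 2.0 controller (assuming the standard project template): the same controller class can return an HTML view or plain data, where previously I’d reach for MVC or Web API 2 separately.

    using Microsoft.AspNetCore.Mvc;

    public class ProductsController : Controller
    {
        // Renders an HTML view, like classic ASP.NET MVC.
        public IActionResult Index() => View();

        // Returns JSON, like ASP.NET Web API 2, from the same controller class.
        [HttpGet("/api/products/{id}")]
        public IActionResult Get(int id) => Ok(new { Id = id, Name = "Sample" });
    }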

It will take some time for me to learn the differences between the two, and whether to use one over the other.

.NET Core is gaining a lot of attention because of its open-source, cross-platform standardization, but from what I’ve read it’s not as mature and doesn’t support all the .NET Framework libraries that are currently available.

This is the base of everything though if I want to be a modern .NET architect.

The Pseudocode Programming Process

Chapter 9 of Code Complete 2 by Steve McConnell details the Pseudocode Programming Process (PPP).

The idea is that you write a routine in plain English (Pseudocode) to abstract away from the actual code.  You then refine the pseudocode into your language of choice.
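
A small example of how I understand the process (my own, not McConnell’s): write the routine as pseudocode first, then fill in the code beneath each line, leaving the pseudocode behind as comments.

    public static class StringFormatter
    {
        // Pseudocode first:
        //   make sure the input string is usable
        //   strip surrounding whitespace
        //   capitalize the first letter and return the result
        public static string Capitalize(string input)
        {
            // make sure the input string is usable
            if (string.IsNullOrWhiteSpace(input))
            {
                return input;
            }

            // strip surrounding whitespace
            var trimmed = input.Trim();

            // capitalize the first letter and return the result
            return char.ToUpper(trimmed[0]) + trimmed.Substring(1);
        }
    }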

One of the supposed benefits is that the original pseudocode can become your comments.  This to me seemed like a great benefit, until I thought about it a bit more.  Some / most code statements don’t need a comment if the statement is clear enough to convey what’s going on.  That’s good code.  Of course, some statements need clarification with comments, which is why commenting is a good idea.  Leaving the pseudocode in as comments clutters up your code.  Although, near the end of the chapter he does advocate for “Remov[ing] redundant comments”.

Also, as the routine evolves, you’re no longer really working in pseudocode, unless you keep another version of it separate from your code, meaning that when you change anything, you also have to change the pseudocode.  I think the DRY (Don’t Repeat Yourself) principle applies here.

Jeff Atwood said it way better than I could.

That being said, I enjoyed the chapter, and definitely see the perks of trying PPP out sometime in the future.  This blog post from David Zych is a nice write-up with an example.

The chapter lists Test-First / Test-Driven Development and Design by Contract as valid supplements or alternatives to PPP.

Side note: the last phase in PPP is checking the code, which includes the step “compile the code”.  That’s one way the book shows its age, as quite a few modern IDEs will compile while you type or save.

More Studying

Studying, Studying, Studying.

A new work project may have me working with a PHP CMS that I haven’t worked with much before.  Once it ramps up, a lot of my time will be spent looking into that.

Currently, I’m finding that the more subjects I study, the more they lead to other areas that I’d like to put some time into.  Everything branches out, and I end up overwhelmed with a list of topics I want to study without actually getting into any of them.  I have a large list in Evernote tracking all the topics I come across, and it’s getting longer rather than shorter.

I’ve committed myself to concentrating on no more than two books at once.  Right now, I’m going through Code Complete 2 by Steve McConnell, and Code: The Hidden Language of Computer Hardware and Software by Charles Petzold.  The former is a tome of knowledge published in 2004; it concentrates on the “construction” of code, with chapter topics such as “Defensive Programming”, “Fundamental Data Types”, and “Refactoring”.  It’s going to take me quite a while to get through it all, but the few chapters I’ve read already have been great.

The latter book is a very thorough breakdown of how the modern computer works, and how it came to be that way.  It can get very technical, describing the various types of gates used in processors and how they are combined into more complex elements.  Despite its depth, it’s very easy to read and get into.  I find myself wanting to read it whenever I have some free time.

Hopefully, I’ll progress with those two books in good time, so I can move on to more great resources, but for now, I’m focusing my attention on those two.

Studying

I’ve stalled on playing with Perl for a while.  I finished reading Learning Perl, 6th edition, and it was a good read.  I did all the sample problems and got through most of them without issue.  I’ve used Perl in a few work projects, and made something neat for a friend with it.  I plan on making a more detailed post about it later, but basically, it converts your iPhone chat history into navigable webpages.  A friend of mine and I had over 14,000 texts between the two of us, and searching past conversations was a huge hassle, so I made a little Perl tool to allow for an easier search experience.

The rest of my time I’ve spent studying up on previous topics I haven’t touched seriously in a while.  Searching, sorting, bit manipulation, data structures.  A lot of things that I studied in computer science, but don’t necessarily get to use every day.