Lessons learned from building Microservices – Part 3: Patterns, Architecture, Implementation And Testing

Introduction

In this blog post I will go over things I’ve learned when working with microservices. I will cover things to do and things that you should not do. There won’t be a lot of code here; mostly theory and ideas.

I will discuss most of the topics from a smaller development team’s point of view. Some of these things are applicable to larger teams and projects, but not necessarily all. It all depends on your own project and needs.

  • Architecture
  • Sharing functionality (Common code base)
  • Continuous Integration / Continuous Delivery (CI/CD)
  • High Availability
  • Messaging and Event driven architecture
  • Security
  • Templates and scripting
  • Logging and Monitoring (+metrics)
  • Configuration pattern
  • Exception handling and Errors
  • Performance and Testing

General advice

Generally I advise considering existing technologies, products and solutions for your application and architecture, to speed up your development and keep the amount of custom code to a minimum.

This will reduce errors and save you both time and money.

Still, make sure that you choose technologies and products that support your and your clients’ solutions; not things that you think are “cool” right now or would be fun to use. Your choices should fit the needs and requirements of not only your project but the whole architecture and the future vision of your project.

Architecture

To keep things simple I would say there are two ways to approach microservices.

Approach one: Starting big but small

The first approach is the one you are probably familiar with. These are big projects by big companies like Amazon, Netflix, Google, Uber etc.

Usually this involves creating hundreds or even thousands of microservices on multiple platforms and technologies. This usually requires large teams of people developing, deploying and maintaining the microservice solution they are working on.

This approach is definitely not for everyone; it requires a lot of people, resources and money.

So this is why I recommend approach number two.

Approach two: Starting small but plan big

Most likely you have a team of a few people and limited resources. In this case I recommend starting somewhere between a microservice architecture and a monolithic one.

By this I mean that you start the process by designing and implementing all the infrastructure of a microservice architecture, but do not start splitting your application into microservices from the start. Create one microservice and expand it, then split it when things have grown to the point where a new microservice feels needed.

By this time you have had time to understand and analyze your business domain. Now you have an idea of what kind of communication you need between microservices; perhaps HTTP based, or decoupled and messaging based.

When you are creating your microservices, keep your design patterns simple. Do not implement overly complicated patterns to impress anyone, including yourself. It will make the upkeep of your microservices hell. You want to keep the code as simple and as uniform as possible, both inside a microservice and between them.

Share as much as possible between microservices.

Create good Continuous Integration and Continuous Deployment procedures as soon as possible. It will save you time.

Verify you have proper availability and scalability based on your application needs.

Prefer scripting to automate and speed up development and upkeep.

Use templates everywhere you can, especially for creating new microservices.

Have a common way to do exception handling and logging.

Have a good configuration plan when deploying your Microservices.

You also need team members who are not afraid to do many different things with multiple technologies. You need people who can learn anything, adapt and develop with any tool, tech or language.

With these approaches and this checklist you should be able to manage a microservice architecture with only a handful of people. For upkeep even one person is enough, but for constant development two or three people would be a good amount.

Sharing functionality (Common code base)

When it comes to code I prefer the “golden” rule of programming: don’t repeat yourself. With microservices, however, you will end up with duplication.

The wise thing to do with microservices is to know when not to duplicate, and to have common access to shareable code and functionality. Why do this? Because otherwise:

  • Developers end up writing similar code that is used again and again in multiple microservices.
  • These common pieces of code and functionality end up having the same kinds of problems and bugs, which have to be corrected in every place.
  • The problems and bugs cause security issues.
  • They cause performance issues.
  • The code can be hard to understand: even when the logic and functionality are the same, the code ends up being slightly or vastly different in each place.
  • And lastly, all of the above combined cause you to spend time and money that you may not have.

The next question to ask is:

What should you share? The main rule is that it must be common. The code should not be specific to a certain microservice or a domain.

Again it all depends on your project but here are a few things:

  • Logging. I highly recommend unifying this into a logging output format that is easy to index and analyze.
  • Access Logs
  • Security
    • User authorization but not authentication or registration. Registration is better suited as an external microservice as its own domain.
    • Encryption
    • JSON Web Token related security code and processing
    • API Key
    • Basic Auth
  • Metrics
  • HTTP Client class for HTTP requests. Create an instance of this class with different parameters to share common functionality for logging, metrics and security.
  • Code to use and access Cloud based resources from AWS, Azure
    • CloudWatch
    • SQL Database
    • AppInsights
    • SQS
    • ServiceBus
    • Redis
    • etc…
  • Email Client
  • Web Related Base classes like for controllers
  • Validations and rules
  • Exception and error handling
  • Metrics logic
  • Configuration and settings logic
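Several of the items above (the HTTP client, logging, metrics, API keys) can live in one shared wrapper. Below is a minimal sketch in Python of what such a shared client could look like; the class and header names are hypothetical, not from the original post:

```python
import time
import urllib.request

class SharedHttpClient:
    """Hypothetical shared HTTP client: every microservice reuses the same
    logging, metrics and auth header handling instead of re-implementing it."""

    def __init__(self, service_name, api_key=None, logger=print):
        self.service_name = service_name
        self.api_key = api_key
        self.logger = logger
        self.metrics = []  # (url, status, elapsed_seconds) tuples

    def build_request(self, url):
        # Common security handling: attach the API key header once, centrally.
        req = urllib.request.Request(url)
        req.add_header("User-Agent", self.service_name)
        if self.api_key:
            req.add_header("X-Api-Key", self.api_key)
        return req

    def get(self, url):
        req = self.build_request(url)
        start = time.monotonic()
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            elapsed = time.monotonic() - start
            # Unified logging and metrics for every outbound call.
            self.metrics.append((url, resp.status, elapsed))
            self.logger(f"{self.service_name} GET {url} -> {resp.status} in {elapsed:.3f}s")
            return body
```

Each microservice then constructs one instance with its own name and credentials, and gets consistent logs and metrics for free.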

How should you distribute your shared functionality and code? Well it all depends on your project but here are a few ways:

  • One library to rule them all :D. Create one library which all projects use. Notice: this might become a problem later on as the amount of code in your common library grows. You will end up with functionality that you may not need in a particular microservice.
  • Create multiple libraries that are used on an as-needed basis, so each microservice pulls in only the bits of functionality it needs.
  • Create Web APIs or similar services that you call to perform a certain action. This might work for things like logging or caching, but not for all functionality. Also notice that you will trade code speed for latency if you outsource your common functionality to a common service that runs independently from the code that needs it.
  • A combination of all of the above.

Dependency injection

Use your preferred dependency injection library to manage your classes and dependencies.

When using your DI I recommend combining classes into “packages” of functionality by feature, domain, logic, data source etc. By doing this you can target specific parts of your code without “contaminating” the project with unneeded code, even if you have a large common library.

For example, you could pack together a set of classes that provide the functionality to communicate with a CRM: getting, modifying and adding data.

This would include your model classes, a CRM client class, some logic that operates on the models to clean them up, etc.

Once you identify these packages, structure your code so that you can add them to your project with the least amount of code.

Also consider creating logic that automatically tells a developer which configurations are missing once a set of functionality is added. The easiest way to achieve this is with checks at compile time and/or runtime.
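As a sketch of both ideas, packaging related classes together and reporting missing configuration at startup, here is a hypothetical `CrmPackage` (all names are illustrative, and settings are assumed to arrive as a flat dict):

```python
class CrmPackage:
    """Hypothetical feature package: bundles everything needed to talk to a CRM
    so it can be added to a project in one step."""

    # Each package declares the settings it needs, so tooling can check them.
    REQUIRED_SETTINGS = ["crm.url", "crm.api_key"]

    def __init__(self, settings):
        missing = [k for k in self.REQUIRED_SETTINGS if not settings.get(k)]
        if missing:
            # Fail fast at startup instead of crashing later on first use.
            raise ValueError(f"CrmPackage is missing settings: {missing}")
        self.settings = settings

def report_missing(packages, settings):
    """Collect every missing setting across all registered packages,
    so the developer sees the full list in one startup report."""
    missing = []
    for pkg in packages:
        missing += [k for k in pkg.REQUIRED_SETTINGS if not settings.get(k)]
    return missing
```

The runtime check is the simple half; a compile-time equivalent would depend on your DI framework.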

See my previous article on this matter for a more detailed description:

https://lionadi.wordpress.com/2019/10/01/spring-boot-bean-management-and-speeding-development/

Continuous Integration/Continuous Delivery (CI/CD)

There are many ways of doing CI and CD, but the main point is to add it and automate as much as possible. This is especially important with microservices and small team sizes.

It will speed things up and keep things working.

Here are a few things to take into consideration:

  1. Create unit tests that are run during your pipelines
  2. Create API or Service level tests that verify that things work with mock or real life data. You can do this by mocking external dependencies or using them for real if available.
  3. Add performance tests and stability tests to your pipelines if possible to verify that things run smoothly.
  4. Think of using the same tool for creating your API or service tests when developing and when running the same tests in a pipeline. You can just reuse the same tests and be sure that what you test manually is the same that should work in production. For example: https://www.postman.com/ and https://github.com/postmanlabs/newman
  5. Script as much as possible and parametrize your scripts for reuse. Identify which scripts can be used and shared to avoid doing things twice.
  6. Use semantic versioning https://semver.org/
  7. Have a deployment plan on how you are going to use branches to deploy to different environments (https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow), for example:
    1. You can use release branches to deploy to different environments based on pipeline steps
    2. Or have specific branches for specific environment. Once things are merged into them certain things start to happen
  8. Use automated build, test and deployment for dev environment once things are merged to your development branch.
  9. Use manual steps for deployments to other environments; this is to avoid testers receiving bad builds in QA, or production crashing on bugs that were not caught.
  10. If you do decide to automate everything all the way to production, make sure you have good safeguards so that things don’t blow up.
  11. And lastly; nothing is eternal. Experiment and re-iterate often and especially if you notice problems.

Common tools for CI/CD

https://www.sonatype.com/product-nexus-repository

https://www.ansible.com/

https://www.rudder.io/

https://www.saltstack.com/

https://puppet.com/try-puppet/puppet-enterprise/

https://cfengine.com/

https://about.gitlab.com/

https://www.jenkins.io/

https://codenvy.com/

https://www.postman.com/

https://www.sonarqube.org/

High Availability

The main point in high availability is that your solution will continue to work as well as possible or as normal even if some parts of it fail.

Here are the three main points:

  • Redundancy—ensuring that any element critical to system operations has an additional, redundant component that can take over in case of failure.
  • Monitoring—collecting data from a running system and detecting when a component fails or stops responding.
  • Failover—a mechanism that can switch automatically from the currently active component to a redundant component, if monitoring shows a failure of the active component.

Technical components enabling high availability

  • Data backup and recovery—a system that automatically backs up data to a secondary location, and recovers back to the source.
  • Load balancing—a load balancer manages traffic, routing it between more than one system that can serve that traffic.
  • Clustering—a cluster contains several nodes that serve a similar purpose, and users typically access and view the entire cluster as one unit. Each node in the cluster can potentially failover to another node if failure occurs. By setting up replication within the cluster, you can create redundancy between cluster nodes.
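As an illustration of load balancing with failover, here is a hypothetical round-robin balancer that skips nodes marked as down. This is a sketch only; real balancers also do health probing, connection draining, weighting and so on:

```python
class RoundRobinBalancer:
    """Minimal load balancer sketch: rotate through nodes in order and
    fail over past any node currently marked as down."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.down = set()
        self.index = 0

    def mark_down(self, node):
        # Monitoring would call this when a node stops responding.
        self.down.add(node)

    def mark_up(self, node):
        self.down.discard(node)

    def next_node(self):
        # Try each node at most once per call; skip unhealthy ones.
        for _ in range(len(self.nodes)):
            node = self.nodes[self.index % len(self.nodes)]
            self.index += 1
            if node not in self.down:
                return node
        raise RuntimeError("no healthy nodes available")
```

The failover behavior falls out naturally: traffic simply flows around the failed node until monitoring marks it up again.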

Things that help in high availability

  • Make your application stateless
  • Use messaging/events to ensure that business critical functionality is performed at some point in time. This is especially true to any write, update or delete operations.
  • Avoid heavy coupling between services if possible; if you have to couple them, use a lightweight messaging system. The most troublesome way to communicate between microservices is going to be over HTTP.
  • Have good health checks that are fast to respond when requested. This can be divided into two categories:
    • Liveness: Checks if the microservice is alive, that is, if it’s able to accept requests and respond.
    • Readiness: Checks if the microservice’s dependencies (Database, queue services, etc.) are themselves ready, so the microservice can do what it’s supposed to do.
  • Use a “circuit breaker” to quickly cut off unresponsive services and bring them back up
  • Make sure that you have enough physical resources(CPU, Memory, disk space etc) to run your solution and your architecture
  • Make sure you have enough request threads supported in your web server and web application
  • Make sure you verify how large an HTTP request your web server and application are allowed to receive; an oversized header is usually what will fail your application.
  • Test your solution broadly with stress and load tests to identify problems. Attach a profiler during these tests to see how your application performs, what bottlenecks there are in your code, what hogs resources etc.
  • Keep your microservice image sizes to a minimum for optimal runs in production and optimal deployment. Don’t add things that you don’t use; it will slow your application down and deployment will suffer, all of which leads to more physical resources and more money needed.
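The liveness/readiness split above can be sketched as two small functions. The shape of the returned payloads is my own assumption, loosely modeled on common health endpoint conventions:

```python
def liveness():
    """Liveness: the process is up and able to respond. Keep this cheap and
    fast; it must not touch any external dependency."""
    return {"status": "UP"}

def readiness(dependencies):
    """Readiness: every dependency must report healthy before this instance
    should receive traffic. `dependencies` maps a name to a zero-argument
    check function returning True/False."""
    results = {}
    for name, check in dependencies.items():
        try:
            results[name] = "UP" if check() else "DOWN"
        except Exception:
            # A failing or throwing check means the dependency is not ready.
            results[name] = "DOWN"
    status = "UP" if all(v == "UP" for v in results.values()) else "DOWN"
    return {"status": status, "dependencies": results}
```

An orchestrator such as Kubernetes would poll these via two separate HTTP endpoints: restart the container when liveness fails, and only remove it from the load balancer when readiness fails.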

Messaging and Event driven architecture

I will be covering this topic in an upcoming post but until then here are a few pointers.

Because microservices by nature can be quickly scaled up and down based on need, I would very highly recommend that you use messaging for business critical operations and logic.

The most important ones, I would say, are writing, updating and deleting data.

I would also recommend using messaging for all long-running operations.

Notice: one of the most important things, I would say, is to log and monitor the success of messages being sent, processed and finished, with a trail back to the original request so you can connect logs and metrics together and get the whole picture when troubleshooting.

I have covered this in my previous logging post: https://lionadi.wordpress.com/2019/12/03/lessons-learned-from-building-microservices-part-1-logging/
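A minimal sketch of that tracing idea: attach a correlation id when a message is sent, and log every stage with that same id so producer, consumer and downstream logs can be joined together. The field names here are assumptions, not from the original posts:

```python
import json
import uuid

def wrap_message(payload, correlation_id=None):
    """Attach a correlation id so logs from the producer, the queue consumer
    and any downstream calls can be traced back to the original request."""
    return {
        "correlation_id": correlation_id or str(uuid.uuid4()),
        "payload": payload,
    }

def log_event(stage, message, logger=print):
    """Log each stage (e.g. 'sent', 'processed', 'finished') as structured
    JSON carrying the same correlation id."""
    logger(json.dumps({"stage": stage, "correlation_id": message["correlation_id"]}))
```

With a log index, searching for one correlation id then returns the full lifecycle of a single business operation across services.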

Security

Generally security is an important aspect of any application and has many different topics and details to cover.

Regarding security, I covered this extensively in my last post in this series on microservices, go check it out: https://lionadi.wordpress.com/2020/03/23/lessons-learned-from-building-microservices-part-2-security/

Templates and scripting

To speed up development, keep things consistent and thus avoid duplicate errors and unnecessary fixes, use templates where possible. This is especially true for microservices.

What are possible templates that you could have:

  • Templates for deploying Cloud resources, like ARM for Azure or CloudFormation for AWS.
  • Backend application templates
  • Frontend application templates
  • CI/CD templates
  • Kubernetes templates
  • and so on…

Anything that you know you will end up having multiple copies of is good to standardize and create templates for.

Also, for applications (frontend or backend), it is very good practice for them to work as soon as you clone them from your repository: they should be up and running as soon as you start them.

Script as much as possible and make the scripts reusable.

Parametrize all of the variables you can in your scripts and templates.

Here are a few things you would need for a backend application template:

  • Security such as authentication and authorization.
  • Logging and metrics
  • Configuration and settings logic
  • Access Logs
  • Exception handling and errors
  • Validations

Logging and Monitoring (+metrics)

Again as with security this is a large topic, I’ve also written about this in my previous post in the series and recommend go checking it out:

https://lionadi.wordpress.com/2019/12/03/lessons-learned-from-building-microservices-part-1-logging/

Configuration pattern

For microservice configurations I recommend the following pattern: in your deployment environments’ (DEV, QA, PROD etc.) configuration files, the configuration/settings values are left empty. You still have the keys in your configuration/settings files, but you leave their values empty.

Next you need to make sure that your code knows how to report empty configuration values when your application is started. You can achieve this by creating a common way to retrieve configurations/settings value and being able to analyze which of the needed and loaded configurations are present.

This way when your docker image is started and the application inside the image starts running and retrieving configurations, you should be able to see what is missing in your environment.

This is mostly because you don’t want to have your environment specific configurations set in your git repository, especially the secrets. You will end up setting these values in your actual QA, PROD etc. environments through some mechanism. If you forget to add a setting/configuration in that mechanism, your docker image may crash and you will end up searching for the problem for a long time; even with proper logging it may not be immediately clear.
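A sketch of the reporting side of this pattern: walk the loaded configuration and list every key whose value was left empty, so a missed environment setting shows up immediately in the startup logs. The config shape here is hypothetical:

```python
def find_empty_settings(config, prefix=""):
    """Walk a nested config dict and return the dotted paths of keys whose
    values were left empty, i.e. the ones the environment failed to inject."""
    empty = []
    for key, value in config.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into nested sections, extending the dotted path.
            empty += find_empty_settings(value, prefix=path + ".")
        elif value in ("", None):
            empty.append(path)
    return empty
```

Logging this list once at startup turns a mysterious crash into a one-line diagnosis of which environment value was forgotten.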

I’ve written a previous post on this matter which opens things up at the code level:

https://lionadi.wordpress.com/2019/10/01/spring-boot-bean-management-and-speeding-development/

Exception handling and Errors

The main points with exceptions and errors:

  • Global exception handling
  • Make sure you do not “leak” exceptions to your clients
  • Use a standardized error response
  • Log things properly
  • And take into consideration the security issues related to errors
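A sketch of the first two points: a global handler that never leaks internals. It logs the full exception with a generated error id, and returns only a standardized body containing that id. All names here are illustrative:

```python
import logging
import uuid

logger = logging.getLogger("service")

def handle_request(handler, request):
    """Global exception wrapper: log full details internally, return only a
    standardized, non-leaking error body to the client."""
    try:
        return 200, handler(request)
    except Exception:
        error_id = str(uuid.uuid4())
        # The full stack trace stays in our logs, keyed by error_id.
        logger.exception("unhandled error, id=%s", error_id)
        # The client sees only a generic message plus the id, which support
        # staff can use to find the matching log entry.
        return 500, {"title": "Internal server error", "error_id": error_id}
```

Most web frameworks offer a hook for exactly this (global exception filters, middleware, `@ControllerAdvice` in Spring); the wrapper above just shows the principle.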

Again, for details on logging and security check my previous posts:

https://lionadi.wordpress.com/2019/12/03/lessons-learned-from-building-microservices-part-1-logging/

https://lionadi.wordpress.com/2020/03/23/lessons-learned-from-building-microservices-part-2-security/

For error responses, you have two choices:

  1. Make up your own
  2. Or use an existing system

I would say avoid making your own if possible but it all depends on your application and architecture.

First consider existing ones for reference:

https://www.hl7.org/fhir/operationoutcome.html

https://developers.google.com/search-ads/v2/standard-error-responses

https://developers.facebook.com/docs/graph-api/using-graph-api/error-handling/

Still, here is an official standard which you can use and which may be supported by your preferred framework or library: https://www.rfc-editor.org/rfc/rfc7807.html

RFC 7807 specifies the following for error responses and details (from https://www.rfc-editor.org/rfc/rfc7807.html):

  • Error responses MUST use standard HTTP status codes in the 400 or 500 range to detail the general category of error.
  • Error responses will be of the Content-Type application/problem, appending a serialization format of either json or xml: application/problem+json, application/problem+xml.
  • Error responses will have each of the following keys (as defined by the Internet Engineering Task Force (IETF)):
    • detail (string) – A human-readable description of the specific error.
    • type (string) – a URL to a document describing the error condition (optional, and “about:blank” is assumed if none is provided; should resolve to a human-readable document).
    • title (string) – A short, human-readable title for the general error type; the title should not change for given types.
    • status (number) – Conveying the HTTP status code; this is so that all information is in one place, but also to correct for changes in the status code due to the usage of proxy servers. The status member, if present, is only advisory as generators MUST use the same status code in the actual HTTP response to assure that generic HTTP software that does not understand this format still behaves correctly.
    • instance (string) – This optional key may be present, with a unique URI for the specific error; this will often point to an error log for that specific response.

RFC 7807 example error response:

HTTP/1.1 403 Forbidden
Content-Type: application/problem+json
Content-Language: en

{
  "type": "https://example.com/invalid-account",
  "title": "Your account is invalid.",
  "detail": "Your account is invalid, your account is not confirmed.",
  "instance": "/account/34122323/data/abc",
  "balance": 30,
  "accounts": ["/account/34122323", "/account/8786875"]
}
HTTP/1.1 400 Bad Request
Content-Type: application/problem+json
Content-Language: en

{
  "type": "https://example.net/validation-error",
  "title": "Your request parameters didn't validate.",
  "invalid-params": [
    {
      "name": "age",
      "reason": "must be a positive integer"
    },
    {
      "name": "color",
      "reason": "must be 'green', 'red' or 'blue'"
    }
  ]
}
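A small helper for producing such responses might look like the following. The function and parameter names are my own; only the body keys and the media type come from RFC 7807:

```python
import json

def problem_response(status, title, detail, type_uri="about:blank",
                     instance=None, extensions=None):
    """Build an RFC 7807 problem details response: (status, headers, body).
    `extensions` holds extra members such as "invalid-params"."""
    body = {
        "type": type_uri,
        "title": title,
        "status": status,  # advisory copy of the real HTTP status code
        "detail": detail,
    }
    if instance:
        body["instance"] = instance
    body.update(extensions or {})
    headers = {"Content-Type": "application/problem+json"}
    return status, headers, json.dumps(body)
```

Keeping one helper like this in the shared library guarantees every microservice emits the same error shape.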

Performance and Testing

Testing

To make sure that your solution and architecture work and perform, I recommend doing extensive testing. Familiarize yourself with the testing pyramid, which holds the following test procedures:

  • Unit tests:
    • Small tests for units of code, each preferably testing one specific thing in your code
    • The tests make sure things work as intended
    • The number of unit tests will outnumber all other tests
    • Your unit tests should run very fast
    • Mock things used in your tested functionality: replace a real thing with a fake version
    • Stub things: set up test data that is then returned and that tests are verified against
    • You end up leaving out external dependencies for better isolation and faster tests.
    • Test structure:
      • Set up the test data
      • Call your method under test
      • Assert that the expected results are returned
  • Integration tests:
    • Here you test your code with external dependencies
    • Replace your real-life dependencies with test doubles that perform and return the same kind of data
    • You can run them locally by spinning them up using technologies like docker images
    • You can run them as part of your pipeline by creating and starting a specific cluster that holds test double instances
    • Example database integration test:
      • start a database
      • connect your application to the database
      • trigger a function within your code that writes data to the database
      • check that the expected data has been written to the database by reading the data from the database
    • Example REST API test:
      • start your application
      • start an instance of the separate service (or a test double with the same interface)
      • trigger a function within your code that reads from the separate service’s API
      • check that your application can parse the response correctly
  • Contract tests:
    • Tests that verify how two separate entities communicate and function with each other based on a commonly predefined contract (provider/publisher and consumer/subscriber). Common communications between entities:
      • REST and JSON via HTTPS
      • RPC using something like gRPC
      • building an event-driven architecture using queues
    • Your tests should cover both the publisher and the consumer logic and data
  • UI Tests:
    • UI tests test that the user interface of your application works correctly.
    • User input should trigger the right actions, data should be presented to the user
    • The UI state should change as expected.
    • UI tests do not need to be performed end-to-end; the backend can be stubbed
  • End-to-End testing:
    • These tests cover the whole spectrum of your application: UI, backend, database/external services etc.
    • These tests verify that your applications work as intended; you can use tools such as Selenium with the WebDriver Protocol.
    • Problems with end-to-end tests
      • End-to-end tests require a lot of maintenance; even the slightest change somewhere will affect the end result in the UI.
      • Failures are common and it may be unclear why they occur
      • Browser issues
      • Timing issues
      • Animation issues
      • Popup dialogs
      • Performance and long wait times for a test to be verified; long run times
    • Consider keeping end-to-end tests to the bare minimum due to the problems described above; test the main and most critical functionalities
  • Acceptance testing:
    • Making sure that your application works correctly from a user’s perspective, not just from a technical perspective.
    • These tests should describe what the user sees, experiences and gets as an end result.
    • Usually done through the user interface
  • Exploratory testing:
    • Manual testing by human beings who try to find creative ways to break the application, or unexpected ways an end user might use the application which might cause problems.
    • After these findings you can automate these checks further down the testing pyramid, in unit tests, integration tests or UI tests.

All of the automated tests can be integrated into your integration and deployment pipeline, and you should consider doing so for as many of them as possible.
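For reference, the unit test structure described above (set up the data, call the method, assert the result) in its simplest form, using a hypothetical `apply_discount` function as the unit under test:

```python
# Unit under test: a small, pure function with one responsibility.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

def test_apply_discount():
    # 1. Set up the test data
    price, percent = 200.0, 15
    # 2. Call the method under test
    result = apply_discount(price, percent)
    # 3. Assert that the expected result is returned
    assert result == 170.0

test_apply_discount()
```

In a real project a test runner such as pytest or JUnit would discover and execute functions like this; no external dependencies are involved, so it runs in microseconds.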

Performance

For performance tests, the only good way to get an idea of your solution’s and architecture’s performance is to break it and see how it works under long, sustained durations.

Two test types are good for this:

  • Stress testing: trying to break things by scaling the load up constantly until your application stops working entirely. Then you analyze your findings based on logs, metrics, test tool results etc.
  • Load testing: a sustained test where you keep making the same requests as you would expect in real life, to get an idea of how things work in the long run; these tests can go on from a few hours to a few days.
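A stripped-down sketch of what such a test harness measures: latencies per call, failure counts, and a percentile summary. In practice you would use a dedicated tool like JMeter and run calls concurrently; this sequential version only shows the shape of the data you are after:

```python
import time

def run_load_test(call, requests):
    """Call the system under test `requests` times, collecting per-call
    latency and counting failures: the raw material for spotting slow paths."""
    latencies = []
    failures = 0
    for _ in range(requests):
        start = time.perf_counter()
        try:
            call()
        except Exception:
            failures += 1
        latencies.append(time.perf_counter() - start)
    return latencies, failures

def percentile(latencies, pct):
    """Report e.g. the 95th percentile rather than the average,
    since averages hide the spikes users actually notice."""
    ordered = sorted(latencies)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]
```

Feeding these numbers into your metrics system per build lets you catch performance regressions in the pipeline, not in production.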

The main idea is that you see problems in your code like:

  • Memory leaks
  • CPU spikes
  • Resource hogging pieces of code
  • Slow pieces of code
  • Network problems
  • External dependency problems
  • etc

One of my favorite tools for this is JMeter https://jmeter.apache.org/.

And to get the most out of these tests, I recommend attaching a code profiler to your solution and seeing what happens during these tests.

There is a HUGE difference between how your code behaves when you manually test it under a profiler and how it behaves when thousands or millions of requests are performed. Some problems only become evident when code is called thousands of times, especially memory allocations and releases.

And lastly: cover at least the most important and critical sections of your solution, and keep adding new tests when possible or when problem areas are discovered.

These tests can also be added as part of the pipelines.

Topology tests

Do your performance tests while simulating possible errors in your architecture or downtime.

  • Simulate slow start times for servers.
  • Simulate slow response times from servers.
  • Simulate servers going down, specific ones or randomly

Test how your system behaves with missing resources and under failure conditions.

Test how expensive your system is

When you are creating tests, consider testing the financial impact of your overall system and architecture. By doing different levels of load tests and stress tests you should be able to get a view of what kind of costs you will end up with.

This is especially important with cloud resources, where what you pay is related to what you consume.

Good to know!?: .NET – Accessing, Querying and Manipulating data with Entity Framework

Hi,

I gathered some links and resources on data manipulation with the .NET Framework. Hope this helps and works as a reference card of what is available:

ADO.NET http://msdn.microsoft.com/en-us/library/e80y5yhx(v=vs.110).aspx
Entity Framework http://msdn.microsoft.com/en-US/data/ef
Configuring Parameters and Parameter Data Types http://msdn.microsoft.com/en-us/library/yy6y35y8(v=vs.110).aspx
.NET Framework Data Providers http://msdn.microsoft.com/en-us/library/a6cd7c08(v=vs.110).aspx
DataSet Class http://msdn.microsoft.com/en-us/library/system.data.dataset.aspx
Retrieving Data Using a DataReader http://msdn.microsoft.com/en-us/library/haa3afyz(v=vs.110).aspx
Entity Framework – Database First http://msdn.microsoft.com/en-us/data/jj206878.aspx
Entity Framework- Code First to a New Database http://msdn.microsoft.com/en-us/data/jj193542.aspx
ADO.NET Entity Data Model Designer http://msdn.microsoft.com/en-us/library/vstudio/cc716685(v=vs.100).aspx
Entity Data Model Wizard http://msdn.microsoft.com/en-us/library/vstudio/bb399247(v=vs.100).aspx
Create Database Wizard (Master Data Services Configuration Manager) http://technet.microsoft.com/en-us/library/ee633799.aspx
Update Model Wizard (Entity Data Model Tools) http://msdn.microsoft.com/en-us/library/vstudio/cc716705(v=vs.100).aspx
Using the DbContext API http://msdn.microsoft.com/en-us/data/gg192989.aspx
Table-per-Type vs Table-per-Hierarchy Inheritance http://blog.devart.com/table-per-type-vs-table-per-hierarchy-inheritance.html
Relational database management system http://en.wikipedia.org/wiki/Relational_database_management_system
ObjectContext Class http://msdn.microsoft.com/en-us/library/system.data.objects.objectcontext(v=vs.110).aspx
Good to know!? C# 5.0 Key Features Reference – Part 2

Hi,

These are copied notes of the main key points from the MS 70-483 prep book. They might be useful to someone as a checklist of C# features.

Also check out the exam link and the actual book:

http://www.microsoft.com/learning/en-us/exam-70-483.aspx

Exam Ref 70-483: Programming in C#

Implement multithreading and asynchronous processing
Using multiple threads can improve responsiveness and enables you to make use of multiple processors.
The Thread class can be used if you want to create your own threads explicitly. Otherwise, you can use the ThreadPool to queue work and let the runtime handle things.
A Task object encapsulates a job that needs to be executed. Tasks are the recommended way to create multithreaded code.
The Parallel class can be used to run code in parallel.
PLINQ is an extension to LINQ to run queries in parallel.
The new async and await operators can be used to write asynchronous code more easily.
Concurrent collections can be used to safely work with data in a multithreaded (concurrent access) environment.
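To make these points concrete, here is a minimal sketch (my own, not from the book; the `AsyncDemo` class name is invented) combining a Task, async/await and a PLINQ query:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class AsyncDemo
{
    // A Task encapsulates a job; async/await keeps the calling code readable.
    public static async Task<int> SumSquaresAsync(int n)
    {
        // AsParallel() turns the LINQ query into a PLINQ query.
        return await Task.Run(() =>
            Enumerable.Range(1, n).AsParallel().Sum(i => i * i));
    }

    static void Main()
    {
        int result = AsyncDemo.SumSquaresAsync(3).Result; // 1 + 4 + 9
        Console.WriteLine(result); // prints 14
    }
}
```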
Manage multithreading
When accessing shared data in a multithreaded environment, you need to synchronize access to avoid errors or corrupted data.
Use the lock statement on a private object to synchronize access to a piece of code.
You can use the Interlocked class to execute simple atomic operations.
You can cancel tasks by using the CancellationTokenSource class with a CancellationToken.
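A small sketch of all three points (my own example; the `SyncDemo` class and counter names are invented):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SyncDemo
{
    private static readonly object _gate = new object();
    private static int _lockedCount, _atomicCount;

    static void Main()
    {
        // Increment shared counters from many threads at once.
        Parallel.For(0, 1000, i =>
        {
            lock (_gate) { _lockedCount++; }          // lock statement
            Interlocked.Increment(ref _atomicCount);  // atomic operation
        });
        Console.WriteLine(_lockedCount + " " + _atomicCount); // 1000 1000

        // Cooperative cancellation: the token is already cancelled here.
        var cts = new CancellationTokenSource();
        cts.Cancel();
        var task = Task.Run(() => { }, cts.Token);
        try { task.Wait(); }
        catch (AggregateException) { Console.WriteLine(task.IsCanceled); } // True
    }
}
```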
Implement program flow
Boolean expressions can use several operators: ==, !=, <, >, <=, >=, !. Those operators can be combined together by using AND (&&), OR (||) and XOR (^).
 You can use the if-else statement to execute code depending on a specific condition.
 The switch statement can be used when matching a value against a couple of options.
The for loop can be used when iterating over a collection where you know the number of iterations in advance.
A while loop can be used to execute some code while a condition is true; do-while should be used when the code should be executed at least once.
foreach can be used to iterate over collections.
Jump statements such as break, goto, and continue can be used to transfer control to another line of the program.
Create and implement events and callback methods
Delegates can be instantiated, passed around, and invoked.
Lambda expressions use the => operator and form a compact way of creating anonymous inline methods.
Events are a layer of syntactic sugar on top of delegates to easily implement the publish-subscribe pattern.
Events can be raised only from the declaring class. Users of events can only add and remove methods from the invocation list.
You can customize events by adding a custom event accessor and by directly using the underlying delegate type.
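A minimal publish-subscribe sketch (my own example; `Publisher` and `MessagePosted` are invented names). Note that only the declaring class can raise the event:

```csharp
using System;

class Publisher
{
    // Event built on the generic EventHandler<T> delegate type (.NET 4.5+).
    public event EventHandler<string> MessagePosted;

    public void Post(string message)
    {
        // C# 5-style null check before raising; raising is only possible here.
        var handler = MessagePosted;
        if (handler != null) handler(this, message);
    }
}

class Program
{
    static void Main()
    {
        var pub = new Publisher();
        // Subscribers add a lambda to the event's invocation list.
        pub.MessagePosted += (sender, msg) => Console.WriteLine("Got: " + msg);
        pub.Post("hello"); // prints "Got: hello"
    }
}
```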
Implement exception handling
In the .NET Framework, you should use exceptions to report errors instead of error codes.
Exceptions are objects that contain data about the reason for the exception.
You can use a try block with one or more catch blocks to handle different types of exceptions.
You can use a finally block to specify code that always runs, whether or not an exception occurred.
You can use the throw keyword to raise an exception.
You can define your own custom exceptions when you are sure that users of your code will handle them in a different way. Otherwise, you should use the standard .NET Framework exceptions.
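A short sketch of try/catch/finally with a custom exception (my own example; `InsufficientFundsException` and `Withdraw` are invented names):

```csharp
using System;

// A custom exception, for cases callers handle differently than standard errors.
class InsufficientFundsException : Exception
{
    public InsufficientFundsException(string message) : base(message) { }
}

class Program
{
    public static string Withdraw(decimal balance, decimal amount)
    {
        try
        {
            if (amount > balance)
                throw new InsufficientFundsException("Balance too low");
            return "ok";
        }
        catch (InsufficientFundsException ex)
        {
            return "error: " + ex.Message;
        }
        finally
        {
            // Always runs, whether or not an exception occurred.
            Console.WriteLine("audit entry written");
        }
    }

    static void Main()
    {
        Console.WriteLine(Withdraw(10m, 50m)); // prints "error: Balance too low"
    }
}
```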
Create types
Types in C# can be value types or reference types.
Generic types use a type parameter to make the code more flexible.
Constructors, methods, properties, fields, and indexer properties can be used to create a type.
Optional and named parameters can be used when creating and calling methods.
Overloading enables a method to accept different parameters.
Extension methods can be used to add new functionality to an existing type.
Overriding enables you to redefine functionality from a base class in a derived class.
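A quick sketch of optional/named parameters and an extension method (my own example; `Greet` and `Shout` are invented names):

```csharp
using System;

static class StringExtensions
{
    // Extension method: adds new functionality to the existing string type.
    public static string Shout(this string s) { return s.ToUpper() + "!"; }
}

class Program
{
    // Optional parameter with a default value.
    public static string Greet(string name, string greeting = "Hello")
    {
        return greeting + ", " + name;
    }

    static void Main()
    {
        Console.WriteLine(Greet("Ada"));                       // Hello, Ada
        Console.WriteLine(Greet(name: "Ada", greeting: "Hi")); // named arguments: Hi, Ada
        Console.WriteLine("done".Shout());                     // DONE!
    }
}
```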
Consume types
Boxing occurs when a value type is treated as a reference type.
When converting between types, you can have an implicit or an explicit conversion.
An explicit conversion is called casting and requires special syntax.
You can create your own implicit and explicit user-defined conversions.
The .NET Framework offers several helper methods for converting types.
The dynamic keyword can be used to bypass the static typing of C# and to improve interoperability with other languages.
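The conversions above in one small sketch (my own example):

```csharp
using System;

class Program
{
    static void Main()
    {
        int number = 42;
        object boxed = number;       // boxing: value type treated as a reference type
        int unboxed = (int)boxed;    // unboxing via an explicit conversion (cast)

        double widened = unboxed;    // implicit conversion: int -> double, no data loss
        int truncated = (int)3.9;    // explicit conversion: precision is lost -> 3

        string converted = Convert.ToString(123); // helper conversion method

        dynamic anything = "text";   // member lookup deferred to runtime
        Console.WriteLine(anything.Length + " " + truncated + " " + converted);
        // prints "4 3 123"
    }
}
```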
Enforce encapsulation
Encapsulation is important in object-oriented software. It hides internal details and improves the usability of a type.
Data can be encapsulated with a property.
Properties can have both a get and a set accessor, commonly known as getters and setters, that can run additional code.
Types and type members can have access modifiers to restrict accessibility.
The access modifiers are public, internal, protected, protected internal, and private.
Explicit interface implementation can be used to hide information or to implement interfaces with duplicate member signatures.
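A sketch of a property running extra code in its setter, plus an explicit interface implementation (my own example; `Document` and `ILegacyPrinter` are invented names):

```csharp
using System;

interface ILegacyPrinter { void Print(); }

class Document : ILegacyPrinter
{
    private string _title; // private field, hidden from callers

    // Property whose set accessor runs extra normalization code.
    public string Title
    {
        get { return _title; }
        set { _title = (value ?? "").Trim(); }
    }

    // Explicit interface implementation: only reachable through the interface.
    void ILegacyPrinter.Print() { Console.WriteLine("printing " + _title); }
}

class Program
{
    static void Main()
    {
        var doc = new Document { Title = "  report  " };
        Console.WriteLine(doc.Title);   // prints "report" (trimmed by the setter)
        ((ILegacyPrinter)doc).Print();  // prints "printing report"
    }
}
```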
Create and implement a class hierarchy
Inheritance is the process in which a class is derived from another class or from an interface.
An interface specifies the public elements that a type must implement.
A class can implement multiple interfaces.
A base class can mark methods as virtual; a derived class can then override those methods to add or replace behavior.
A class can be marked as abstract so it can’t be instantiated and can function only as a base class.
A class can be marked as sealed so it can’t be inherited.
The .NET Framework offers standard interfaces such as IComparable, IEnumerable, IDisposable, and IEquatable.
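The abstract/virtual/override/sealed points in one sketch (my own example; `Shape` and `Square` are invented names):

```csharp
using System;

abstract class Shape // abstract: can't be instantiated, only inherited
{
    public abstract double Area();
    public virtual string Describe() { return "a shape"; }
}

sealed class Square : Shape // sealed: can't be inherited further
{
    private readonly double _side;
    public Square(double side) { _side = side; }

    public override double Area() { return _side * _side; }
    public override string Describe() { return "a square"; } // replaces base behavior
}

class Program
{
    static void Main()
    {
        Shape s = new Square(3);
        Console.WriteLine(s.Describe() + " with area " + s.Area());
        // prints "a square with area 9"
    }
}
```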
Find, execute, and create types at runtime by using reflection
A C# assembly stores both code and metadata.
Attributes are a type of metadata that can be applied in code and queried at runtime.
Reflection is the process of inspecting the metadata of a C# application.
Through reflection you can create types, call methods, read properties, and so forth.
The CodeDOM can be used to create a compilation unit at runtime. It can be compiled or converted to a source file.
Expression trees describe a piece of code. They can be translated to something else (for example, SQL) or they can be compiled and executed.
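A sketch of querying attributes and creating a type at runtime through reflection (my own example; `AuthorAttribute` and `Report` are invented names):

```csharp
using System;

[AttributeUsage(AttributeTargets.Class)]
class AuthorAttribute : Attribute
{
    public string Name { get; set; }
}

[Author(Name = "Jane")]
class Report
{
    public string Title { get; set; }
}

class Program
{
    static void Main()
    {
        // Query attribute metadata at runtime.
        Type type = typeof(Report);
        var author = (AuthorAttribute)type
            .GetCustomAttributes(typeof(AuthorAttribute), false)[0];
        Console.WriteLine(author.Name); // prints "Jane"

        // Create an instance and set a property through reflection.
        object report = Activator.CreateInstance(type);
        type.GetProperty("Title").SetValue(report, "Q1", null);
        Console.WriteLine(type.GetProperty("Title").GetValue(report, null)); // Q1
    }
}
```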
Manage the object life cycle
Memory in C# consists of both the stack and the heap.
The heap is managed by the garbage collector.
The garbage collector frees any memory that is not referenced any more.
A finalizer is a special piece of code that's run by the garbage collector when it removes an object.
IDisposable can be implemented to free any unmanaged resources in a deterministic way.
Objects implementing IDisposable can be used with a using statement to make sure they are always freed.
A WeakReference can be used to maintain a reference to items that can be garbage collected when necessary.
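A minimal IDisposable/using sketch (my own example; `TempResource` is an invented name standing in for something holding an unmanaged resource):

```csharp
using System;

class TempResource : IDisposable
{
    public bool Disposed { get; private set; }

    // Dispose frees resources deterministically, without waiting for the GC.
    public void Dispose()
    {
        Disposed = true;
        Console.WriteLine("resource released");
    }
}

class Program
{
    static void Main()
    {
        TempResource resource;
        // using guarantees Dispose runs, even if an exception is thrown inside.
        using (resource = new TempResource())
        {
            Console.WriteLine("working with resource");
        }
        Console.WriteLine(resource.Disposed); // prints "True"
    }
}
```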
Manipulate strings
A string is an immutable reference type.
When doing a lot of string manipulations, you should use a StringBuilder.
The String class offers a lot of methods for dealing with strings like IndexOf, LastIndexOf, StartsWith, EndsWith, and Substring.
Strings can be enumerated as a collection of characters.
Formatting is the process of displaying an object as a string.
You can use format strings to change how an object is converted to a string.
You can implement formatting for your own types.
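A short sketch of StringBuilder, the common string methods, and a format string (my own example; InvariantCulture is used so the output doesn't depend on locale):

```csharp
using System;
using System.Globalization;
using System.Text;

class Program
{
    static void Main()
    {
        // Strings are immutable: use StringBuilder for repeated concatenation.
        var sb = new StringBuilder();
        for (int i = 1; i <= 3; i++) sb.Append(i).Append(";");
        Console.WriteLine(sb.ToString()); // prints "1;2;3;"

        string s = "hello world";
        Console.WriteLine(s.StartsWith("hello")); // True
        Console.WriteLine(s.IndexOf("world"));    // 6
        Console.WriteLine(s.Substring(0, 5));     // hello

        // A format string controls how a value is converted to text.
        Console.WriteLine(3.14159.ToString("0.00", CultureInfo.InvariantCulture)); // 3.14
    }
}
```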
Validate application input
Validating application input is important to protect your application against both mistakes and attacks.
Data integrity should be managed both by your application and your data store.
The Parse, TryParse, and Convert functions can be used to convert between types.
Regular expressions, or regex, can be used to match input against a specified pattern or replace specified characters with other values.
When receiving JSON and XML files, it's important to validate them using the built-in types, such as JavaScriptSerializer and XML Schemas.
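A sketch of TryParse and a regex check (my own example; the five-digit postal code pattern is just an illustration):

```csharp
using System;
using System.Text.RegularExpressions;

class Program
{
    static void Main()
    {
        // TryParse avoids exceptions on bad input and reports success via a bool.
        int value;
        bool ok = int.TryParse("123", out value);
        Console.WriteLine(ok + " " + value);               // True 123
        Console.WriteLine(int.TryParse("abc", out value)); // False

        // A regular expression matching exactly five digits.
        var postalCode = new Regex(@"^\d{5}$");
        Console.WriteLine(postalCode.IsMatch("90210")); // True
        Console.WriteLine(postalCode.IsMatch("9021a")); // False
    }
}
```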
Perform symmetric and asymmetric encryption
A symmetric algorithm uses the same key to encrypt and decrypt data.
An asymmetric algorithm uses a public and private key that are mathematically linked.
Hashing is the process of converting a large amount of data to a smaller hash code.
Digital certificates can be used to verify the authenticity of an author.
Code access security (CAS) is used to restrict the resources and operations an application can access and execute.
System.Security.SecureString can be used to keep sensitive string data in memory.
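A small hashing sketch (my own example): SHA-256 maps input of any size to a fixed 32-byte hash, and Aes.Create gives a ready-to-use symmetric algorithm with a generated key.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class Program
{
    static void Main()
    {
        // Hashing: arbitrary data in, fixed-size hash code out.
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes("secret"));
            Console.WriteLine(hash.Length); // prints 32
            Console.WriteLine(BitConverter.ToString(hash));
        }

        // Symmetric algorithm: the generated Key both encrypts and decrypts.
        using (var aes = Aes.Create())
        {
            Console.WriteLine(aes.Key.Length * 8); // key size in bits
        }
    }
}
```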
Manage assemblies
An assembly is a compiled unit of code that contains metadata.
An assembly can be strongly signed to make sure that no one can tamper with the content.
Signed assemblies can be put in the GAC.
An assembly can be versioned, and applications will use the assembly version they were developed with. It’s possible to use configuration files to change these bindings.
A WinMD assembly is a special type of assembly that is used by WinRT to map non-native languages to the native WinRT components.
Debug an application
Visual Studio build configurations can be used to configure the compiler.
A debug build outputs a nonoptimized version of the code that contains extra instructions to help debugging.
A release build outputs optimized code that can be deployed to a production environment.
Compiler directives can be used to give extra instructions to the compiler. You can use them, for example, to include code only in certain build configurations or to suppress certain warnings.
A program database (PDB) file contains extra information that is required when debugging an application.
Implement diagnostics in an application
Logging and tracing are important to monitor an application that is in production and should be implemented right from the start.
You can use the Debug and TraceSource classes to log and trace messages. By configuring different listeners, you can configure your application to know which data to send where.
When you are experiencing performance problems, you can profile your application to find the root cause and fix it.
Performance counters can be used to constantly monitor the health of your applications.
Perform I/O operations
You can work with drives by using the DriveInfo class.
For folders, you can use Directory and DirectoryInfo.
File and FileInfo offer methods to work with files.
The static Path class can help you in creating and parsing file paths.
Streams are an abstract way of working with a series of bytes.
There are many Stream implementations for dealing with files, network operations, and any other types of I/O.
Remember that the file system can be accessed and changed by multiple users at the same time. You need to keep this in mind when creating reliable applications.
When performing network requests, you can use the WebRequest and WebResponse classes from the System.Net namespace.
Asynchronous I/O can help you create a better user experience and a more scalable application.
Consume data
ADO.NET uses a provider model that enables you to connect to different types of databases.
You use a DbConnection object to create a connection to a database.
You can execute queries that create, read, update, and delete (CRUD) data in a database.
When creating queries it's important to use parameterized queries so you avoid SQL injection.
You can consume a web service from your application by creating a proxy for it.
You can work with XML by using the XmlReader, XmlWriter, XPathNavigator, and XmlDocument classes.
Query and manipulate data and objects by using LINQ
LINQ, which stands for Language Integrated Query, is a uniform way of writing queries against multiple data sources.
Important language features when working with LINQ queries are implicit typing, object initialization syntax, lambdas, extension methods, and anonymous types.
You can use LINQ with both a method-based syntax and the query syntax.
LINQ queries use deferred execution, which means that a query executes when it is first iterated.
 You can use LINQ to XML to query, create, and update XML.
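Both LINQ syntaxes side by side in a small sketch (my own example):

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int[] numbers = { 5, 2, 8, 1 };

        // Query syntax; execution is deferred until the query is iterated.
        var query = from n in numbers where n > 1 orderby n select n * 10;

        // Method-based syntax with lambdas; produces the same result.
        var method = numbers.Where(n => n > 1).OrderBy(n => n).Select(n => n * 10);

        Console.WriteLine(string.Join(",", query));  // prints "20,50,80"
        Console.WriteLine(string.Join(",", method)); // prints "20,50,80"
    }
}
```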
Serialize and deserialize data
 Serialization is the process of transforming an object to a flat file or a series of bytes.
 Deserialization takes a series of bytes or a flat file and transforms it into an object.
 XML serialization can be done by using the XmlSerializer.
 You can use special attributes to configure the XmlSerializer.
 Binary serialization can be done by using the BinaryFormatter class.
 WCF uses another type of serialization that is performed by the DataContractSerializer.
 JSON is a compact text format that can be created by using the DataContractJsonSerializer.
Store data in and retrieve data from collections
The .NET Framework offers both generic and nongeneric collections. When possible, you should use the generic versions.
Array is the most basic type for storing a number of items. It has a fixed size.
List is a collection that can grow when needed. It's the most commonly used collection.
Dictionary stores and accesses items using key/value pairs.
HashSet stores unique items and offers set operations that can be used on them.
A Queue is a first-in, first-out (FIFO) collection.
A Stack is a last-in, first-out (LIFO) collection.
You can create a custom collection by inheriting from a collection class or implementing one of the collection interfaces.
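The main generic collections in one sketch (my own example):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var list = new List<int> { 3, 1 };   // grows when needed
        list.Add(2);

        var prices = new Dictionary<string, decimal> { { "apple", 1.5m } };

        var queue = new Queue<string>();     // first-in, first-out
        queue.Enqueue("a"); queue.Enqueue("b");

        var stack = new Stack<string>();     // last-in, first-out
        stack.Push("a"); stack.Push("b");

        var unique = new HashSet<int> { 1, 2, 2, 3 }; // duplicate 2 is ignored

        Console.WriteLine(list.Count);      // prints 3
        Console.WriteLine(prices["apple"]);
        Console.WriteLine(queue.Dequeue()); // prints "a"
        Console.WriteLine(stack.Pop());     // prints "b"
        Console.WriteLine(unique.Count);    // prints 3
    }
}
```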