Friday, January 17

Microservices Training | Microservices Docker Example | Microservices Tu...

Microservices are self-contained, independent application units that each fulfill only one specific business function, so they can be considered small applications in their own right. What happens if you decide to build several microservices with different technology stacks? Your team will soon be in trouble, as developers have to manage even more environments than they would with a traditional monolithic application.
The solution is to use containers to encapsulate each microservice. Docker is a tool that helps you manage those containers.
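
As a rough illustration, the sketch below uses the Docker SDK for Python to start two such containers; the image names, ports, and stacks are hypothetical, and it assumes the docker package and a local Docker daemon are available.

    import docker

    # Connect to the local Docker daemon.
    client = docker.from_env()

    # Each microservice ships as its own image, so its technology stack is
    # fully encapsulated. Image names and port mappings are placeholders.
    orders = client.containers.run(
        "orders-service:latest",    # e.g. a JVM-based stack
        detach=True,
        ports={"8080/tcp": 8081},
    )
    billing = client.containers.run(
        "billing-service:latest",   # e.g. a Python-based stack
        detach=True,
        ports={"8080/tcp": 8082},
    )

    # Docker manages both containers the same way, whatever runs inside them.
    print(orders.name, billing.name)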

I am sharing a free YouTube tutorial that we used to train our team developing microservices.

https://youtu.be/UWl7X2fUWTM?t=21

Another tutorial on Microservices, Docker, and Kubernetes.




Sunday, January 12

Database design for Microservices

History of Microservices - 

A workshop of software architects held near Venice in May 2011 used the term "microservice" to describe what the participants saw as a common architectural style that many of them had been recently exploring. In May 2012, the same group decided on "microservices" as the most appropriate name. James Lewis presented some of those ideas as a case study in March 2012 at 33rd Degree in Kraków in "Micro services - Java, the Unix Way", as did Fred George about the same time. Adrian Cockcroft, former director for the Cloud Systems at Netflix, described this approach as "fine grained SOA" and pioneered the style at web scale, as did many of the others mentioned in this article - Joe Walnes, Dan North, Evan Bottcher and Graham Tackley. (Source: Wikipedia)

Accepted Definition of Microservices - 

Microservices are a software development technique - a variant of the service-oriented architecture (SOA) structural style - that arranges an application as a collection of loosely coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight.



Database design for Microservices

The main idea behind microservices architecture is that some types of applications become easier to build and maintain when they are broken down into smaller, composable pieces that work together. The main benefit of the microservices architecture is that it improves agility and reduces development time. When you correctly decompose a system into microservices, you can develop and deploy each microservice independently and in parallel with the other services.


In order to develop microservices independently, they must be loosely coupled. Each microservice’s persistent data must be private to that service and only accessible via its API. If two or more microservices were to share persistent data, you would need to carefully coordinate changes to the data’s schema, which would slow down development.
There are a few different ways to keep a service’s persistent data private; you do not need to provision a database server for each service. For example, if you are using a relational database, the options are:
  • Private-tables-per-service – each service owns a set of tables that must only be accessed by that service
  • Schema-per-service – each service has a database schema that’s private to that service
  • Database-server-per-service – each service has its own database server.
Private-tables-per-service and schema-per-service have the lowest overhead. Using a schema per service is ideal, since it makes ownership clearer. For some applications, it might make sense for database-intensive services to have their own database server.
It is a good idea to create barriers that enforce this modularity. You could, for example, assign a different database user id to each service and use a database access control mechanism. Without some kind of barrier to enforce encapsulation, developers will always be tempted to bypass a service’s API and access its data directly.
It might also make sense to have a polyglot persistence architecture: for each service, you choose the type of database that is best suited to that service’s requirements. For example, a service that does text searches could use Elasticsearch, and a service that manipulates a social graph could use Neo4j. It might not make sense to use a relational database for every service.
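Returning to the access-barrier idea, below is a minimal sketch of schema-per-service combined with a per-service database user, assuming a PostgreSQL database and the psycopg2 driver; the schema, role, and credential names are placeholders.

    import psycopg2

    # Connect as an administrative user (connection details are placeholders).
    conn = psycopg2.connect("dbname=appdb user=admin password=secret host=localhost")
    conn.autocommit = True
    cur = conn.cursor()

    # One private schema and one database role per service.
    cur.execute("CREATE SCHEMA IF NOT EXISTS orders")
    cur.execute("CREATE ROLE orders_svc LOGIN PASSWORD 'orders_pw'")

    # The barrier: only the owning service's role may use its schema.
    cur.execute("REVOKE ALL ON SCHEMA orders FROM PUBLIC")
    cur.execute("GRANT USAGE, CREATE ON SCHEMA orders TO orders_svc")
    cur.execute(
        "ALTER DEFAULT PRIVILEGES IN SCHEMA orders "
        "GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO orders_svc"
    )

    cur.close()
    conn.close()

With this setup, another service’s credentials simply cannot read the orders tables, so the only way in is through the orders service’s API.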
There are some downsides to keeping a service’s persistent data private. Most notably, it can be challenging to implement business transactions that update data owned by multiple services. Rather than using distributed transactions, you typically must use an eventually consistent, event-driven approach to maintain database consistency.
Another problem is that it is difficult to implement some queries, because you cannot do database joins across the data owned by multiple services. Sometimes you can join the data within a service. In other situations, you will need to use Command Query Responsibility Segregation (CQRS) and maintain denormalized views.
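A toy, in-process sketch of this event-driven approach is shown below; a real system would use a message broker, and the event name, services, and fields are invented for illustration.

    from collections import defaultdict

    # Toy in-process event bus standing in for a real message broker.
    subscribers = defaultdict(list)

    def publish(event_type, payload):
        for handler in subscribers[event_type]:
            handler(payload)

    # Order service: updates its own private data, then publishes an event.
    orders_db = {}

    def place_order(order_id, customer_id, total):
        orders_db[order_id] = {"customer_id": customer_id, "total": total}
        publish("OrderPlaced", {"order_id": order_id, "customer_id": customer_id})

    # Customer service: maintains a denormalized, CQRS-style read model of
    # order counts per customer, kept eventually consistent via events.
    customer_order_counts = defaultdict(int)

    def on_order_placed(event):
        customer_order_counts[event["customer_id"]] += 1

    subscribers["OrderPlaced"].append(on_order_placed)

    place_order("o-1", "c-42", 99.0)
    print(customer_order_counts["c-42"])  # prints 1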
Another challenge is that services sometimes need to share data. For example, let’s imagine that several services need access to user profile data. One option is to encapsulate the user profile data in a service that is then called by the other services. Another option is to use an event-driven mechanism to replicate the data to each service that needs it.
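As a minimal sketch of the first option, a consuming service might fetch the data through the owning service’s API; the endpoint URL is hypothetical and the example assumes the requests library.

    import requests

    # Hypothetical internal endpoint owned by the user-profile service.
    PROFILE_URL = "http://user-profile-service:8080/profiles/{user_id}"

    def get_user_profile(user_id):
        # Other services fetch profile data through the owning service's API
        # instead of reading its database directly.
        response = requests.get(PROFILE_URL.format(user_id=user_id), timeout=2)
        response.raise_for_status()
        return response.json()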
In summary, it is important that each service’s persistent data is private. There are, however, a few different ways to accomplish this, such as a schema per service. Some applications benefit from a polyglot persistence architecture that uses a mixture of database types. A downside of not sharing databases is that maintaining data consistency and implementing queries is more challenging.

Wednesday, January 1

How is AWS Lambda used in Localytics?

Localytics is a Boston-based web and mobile app analytics and engagement company. Its marketing and analytics tools are used extensively by major brands, such as ESPN, eBay, Fox, Salesforce and The New York Times, to understand and evaluate the performance of their apps and to engage with both existing and new customers.

Use case in Localytics

The software developed by Localytics is employed in more than 37,000 apps on more than 3 billion devices all around the world.

Regardless of how popular Localytics is now, it faced some serious challenges before it started using Lambda.

Let’s see what the challenges were before we discuss how Lambda came to the rescue and helped Localytics overcome these challenges.

Challenges

  • Billions of data points, uploaded every day from the mobile applications running Localytics analytics software, are fed into the pipeline that the platform team supports.
  • Creating new services required the engineering team to access subsets of that data, which in turn required additional capacity planning, utilization monitoring, and infrastructure management.
  • The platform team was more inclined toward enabling self-service for engineering teams.
  • Every time a microservice was added, the main analytics processing service for Localytics had to be updated.


   

How AWS Lambda helped

  • Localytics now uses AWS to send about 100 billion data points monthly through Elastic Load Balancing (ELB), which distributes the incoming application traffic across multiple targets.
  • From there, the data goes to Amazon Simple Queue Service (SQS), which decouples the microservices, distributed systems, and serverless applications that consume it.
  • It then reaches Amazon Elastic Compute Cloud (EC2) and finally flows into an Amazon Kinesis stream, which makes it easy to collect, process, and analyze the real-time streaming data, so Localytics can get timely insights and react quickly to new information.
  • With the help of AWS Lambda, a new microservice is created for each new feature of the marketing software that needs access to the Amazon Kinesis stream, and these microservices can read the data in parallel (see the sketch after this list).
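
A minimal sketch of such a Lambda consumer in Python is shown below; the handler logic and payload fields are assumptions, while the Kinesis record format (base64-encoded data under record["kinesis"]["data"]) is standard.

    import base64
    import json

    def lambda_handler(event, context):
        # Hypothetical feature-specific microservice reading analytics data
        # points from the Kinesis stream that feeds the pipeline.
        processed = 0
        for record in event["Records"]:
            # Kinesis delivers each payload base64-encoded.
            payload = base64.b64decode(record["kinesis"]["data"])
            data_point = json.loads(payload)
            # Feature-specific processing would go here; we just log the app id.
            print("processing data point for app:", data_point.get("app_id"))
            processed += 1
        return {"processed": processed}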


 
