Friday, January 17

Microservices Training | Microservices Docker Example | Microservices Tu...

Microservices are self-contained, independent application units that each fulfill only one specific business function, so they can be considered small applications in their own right. What happens if you decide to build several microservices with different technology stacks? Your team will soon be in trouble, as developers have to manage even more environments than they would with a traditional monolithic application.
The solution is to use containers to encapsulate each microservice. Docker is a tool that helps you manage those containers.

I am sharing a free YouTube tutorial that we used to train our team developing microservices.

https://youtu.be/UWl7X2fUWTM?t=21

Another tutorial on Microservices Docker, Kubernetes  




Sunday, January 12

Database design for Microservices

History of Microservices - 

A workshop of software architects held near Venice in May 2011 used the term "microservice" to describe what the participants saw as a common architectural style that many of them had been recently exploring. In May 2012, the same group decided on "microservices" as the most appropriate name. James Lewis presented some of those ideas as a case study in March 2012 at 33rd Degree in Kraków in "Micro services - Java, the Unix Way", as did Fred George about the same time. Adrian Cockcroft, former director for the Cloud Systems at Netflix, described this approach as "fine grained SOA" and pioneered the style at web scale, as did many of the others mentioned in this article - Joe Walnes, Dan North, Evan Bottcher and Graham Tackley. (Source: Wikipedia)

Accepted Definition of Microservices - 

Microservices are a software development technique - a variant of the service-oriented architecture (SOA) structural style - that arranges an application as a collection of loosely coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight.



Database design for Microservices

The main idea behind microservices architecture is that some types of applications become easier to build and maintain when they are broken down into smaller, composable pieces that work together. The main benefit of the microservices architecture is that it improves agility and reduces development time. When you correctly decompose a system into microservices, you can develop and deploy each microservice independently and in parallel with the other services.


In order to be able to independently develop microservices, they must be loosely coupled. Each microservice’s persistent data must be private to that service and only accessible via its API. If two or more microservices were to share persistent data, you would need to carefully coordinate changes to the data’s schema, which would slow down development.
There are a few different ways to keep a service’s persistent data private. You do not need to provision a database server for each service. For example, if you are using a relational database then the options are:
  • Private-tables-per-service – each service owns a set of tables that must only be accessed by that service
  • Schema-per-service – each service has a database schema that’s private to that service
  • Database-server-per-service – each service has its own database server.
Private-tables-per-service and schema-per-service have the lowest overhead. Using a schema per service is ideal since it makes ownership clearer. For some applications, it might make sense for database-intensive services to have their own database server.
It is a good idea to create barriers that enforce this modularity. You could, for example, assign a different database user id to each service and use a database access control mechanism. Without some kind of barrier to enforce encapsulation, developers will always be tempted to bypass a service’s API and access its data directly. It might also make sense to have a polyglot persistence architecture: for each service you choose the type of database that is best suited to that service’s requirements. For example, a service that does text searches could use ElasticSearch, while a service that manipulates a social graph could use Neo4j. It might not make sense to use a relational database for every service.
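As a minimal sketch of this idea, the following Python example gives each service its own private datastore (SQLite in-memory databases stand in for real per-service schemas or servers) and forces all cross-service access through a public method. The service names, tables, and methods are hypothetical, chosen only for illustration:

```python
import sqlite3

class OrderService:
    """Owns its persistent data; other services must go through its API."""
    def __init__(self):
        # Sketch: each service gets its own private database
        # (in production this would be a private schema or server).
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

    def create_order(self, customer, total):
        cur = self.db.execute(
            "INSERT INTO orders (customer, total) VALUES (?, ?)", (customer, total))
        self.db.commit()
        return cur.lastrowid

    def get_order(self, order_id):
        # The public API -- the only way other services may read order data.
        row = self.db.execute(
            "SELECT id, customer, total FROM orders WHERE id = ?",
            (order_id,)).fetchone()
        return {"id": row[0], "customer": row[1], "total": row[2]} if row else None

class CustomerService:
    def __init__(self, order_service):
        self.db = sqlite3.connect(":memory:")  # its own private store
        self.orders = order_service            # collaborates only via the API

    def order_summary(self, order_id):
        # Never touches OrderService's tables directly.
        order = self.orders.get_order(order_id)
        return f"{order['customer']} owes {order['total']}"
```

Because `CustomerService` holds no connection to the orders database, a schema change inside `OrderService` cannot break it as long as `get_order` keeps its contract.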
There are some downsides to keeping a service’s persistent data private. Most notably, it can be challenging to implement business transactions that update data owned by multiple services. Rather than using distributed transactions, you typically must use an eventually consistent, event-driven approach to maintain database consistency.
Another problem is that it is difficult to implement some queries because you can’t do database joins across the data owned by multiple services. Sometimes you can join the data within a service. In other situations, you will need to use Command Query Responsibility Segregation (CQRS) and maintain denormalized views.
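The CQRS idea above can be sketched with an in-memory event bus: instead of a cross-service join, a read model subscribes to events from the owning services and maintains its own denormalized view. The event names and payload fields here are hypothetical, assumed only for the sake of the example:

```python
from collections import defaultdict

class EventBus:
    """Toy in-process event bus; a real system would use a message broker."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.handlers[event_type]:
            handler(payload)

class OrderHistoryView:
    """CQRS read model: a denormalized 'join' of customer and order data,
    kept up to date from events instead of a cross-service database join."""
    def __init__(self, bus):
        self.customers = {}  # customer_id -> name, replicated locally
        self.rows = []       # the denormalized view
        bus.subscribe("CustomerCreated", self.on_customer_created)
        bus.subscribe("OrderPlaced", self.on_order_placed)

    def on_customer_created(self, event):
        self.customers[event["customer_id"]] = event["name"]

    def on_order_placed(self, event):
        self.rows.append({
            "order_id": event["order_id"],
            "customer_name": self.customers.get(event["customer_id"], "?"),
            "total": event["total"],
        })
```

The view is eventually consistent: it reflects whatever events it has seen so far, which is exactly the trade-off the text describes.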
Another challenge is that services sometimes need to share data. For example, let’s imagine that several services need access to user profile data. One option is to encapsulate the user profile data with a service that is then called by the other services. Another option is to use an event-driven mechanism to replicate data to each service that needs it.
In summary, it is important that each service’s persistent data is private. There are, however, a few different ways to accomplish this, such as a schema per service. Some applications benefit from a polyglot persistence architecture that uses a mixture of database types. A downside of not sharing databases is that maintaining data consistency and implementing queries is more challenging.

Wednesday, January 1

How is AWS Lambda used in Localytics?


Localytics is a Boston-based, web and mobile app analytics and engagement company. Its marketing and analytics tools are being extensively used by some major brands, such as ESPN, eBay, Fox, SalesForce and The New York Times, to understand and evaluate the performance of their apps and to engage with the existing as well as the new customers.

Use case in Localytics

The software developed by Localytics is employed in more than 37,000 apps on more than 3 billion devices all around the world.

Regardless of how popular Localytics is now, Localytics had faced some serious challenges before they started using Lambda.

Let’s see what the challenges were before we discuss how Lambda came to the rescue and helped Localytics overcome these challenges.

Challenges

  • Billions of data points uploaded every day from the mobile applications running Localytics analytics software are fed into the pipeline that they support.
  • Additional capacity planning, utilization monitoring, and infrastructure management were required because the engineering team had to access subsets of data in order to create new services.
  • The platform team was more inclined toward enabling self-service for engineering teams.
  • Every time a microservice was added, the main analytics processing service for Localytics had to be updated.


   

Solution

  • Localytics now uses AWS to send about 100 billion data points monthly through Elastic Load Balancing, where ELB distributes the incoming application traffic across multiple targets.
  • Afterward, the data goes to Amazon Simple Queue Service, which makes it possible to decouple and scale microservices, distributed systems, and serverless applications.
  • Then it reaches Amazon Elastic Compute Cloud and, finally, an Amazon Kinesis stream, which makes it easy to collect, process, and analyze real-time streaming data so that Localytics can get timely insights and react quickly to new information.
  • With the help of AWS Lambda, a new microservice is created for each new feature of the marketing software to access Amazon Kinesis. Microservices can access the data in parallel.
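To make the last step concrete, here is a minimal sketch of what a Kinesis-triggered Lambda handler for one such feature microservice could look like. AWS delivers Kinesis records to Lambda base64-encoded under `event["Records"]`; the payload fields and the return value are assumptions for illustration, not Localytics' actual code:

```python
import base64
import json

def lambda_handler(event, context):
    """Hypothetical handler for a Kinesis-triggered feature microservice.

    Each record's data arrives base64-encoded inside
    event["Records"][i]["kinesis"]["data"].
    """
    points = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        points.append(json.loads(payload))

    # Each feature-specific microservice would apply its own
    # processing to the decoded data points here.
    return {"processed": len(points)}
```

Because every new feature gets its own handler reading from the same stream, teams can add microservices without touching the main analytics processing service, which is exactly the self-service benefit described above.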


 

Friday, December 6

Walking In The Cloud


I have been discussing Cloud Strategy with a group of architects, and once again it became evident that most people still seem to think Cloud is just another product that you can adopt without any real need for an enterprise-wide Cloud Strategy. It is not as simple as selecting one cloud service provider from Amazon Web Services (AWS), Google, Microsoft Azure, Salesforce, and IBM. Enterprises should take the help of a Cloud Architect, or else take help from the Cloud service provider, to define their Cloud Strategy. For example, if you are considering AWS, then an AWS architect will provide architecture guidance to the enterprise to define its cloud strategy. Each vendor has multiple product offerings for its Cloud, and it is important to understand how the Cloud Service Provider will replace your hardware infrastructure and how it will provide a robust solution.

Key points to decide the cloud strategy
  • How is data going to be stored on the cloud, and how will you save data center expenses?
  • How will the cloud enable applications to scale on demand?
  • How is cloud capacity planning different from owned-infrastructure capacity planning?
  • How will the enterprise save capital expenses by way of Pay-As-You-Use variable expenses?
  • How will the cloud increase speed of delivery by way of APIs and reusable services?
  • How will the business become more agile by moving to a cloud-based architecture?
  • What are the additional features of each cloud service provider that differentiate it from the others?

Many enterprises still lack clarity about their cloud strategy because they are not familiar with the Cloud offerings. An enterprise that wants to adopt the Cloud across all its business units must have a mature and well-formed understanding of its Enterprise Architecture and a clear view of its components. Enterprise Architecture Planning enables an enterprise to build the structural foundations to support proposed business strategies. It captures the vision of an enterprise by integrating its dimensions to contextualize transformation strategies, organizational structures, business capabilities, data pools, IT applications, and all technology objects. Every business unit of an enterprise is subject to change, and each change may have significant consequences throughout organizational domains.

Cloud Computing is a paradigm to decentralize data centers by virtualizing both infrastructure and platform, and to enable services over the internet. It gives access to platforms, services, and tools from browsers deployed across millions of terminals. It also reduces the management and maintenance of all the resources associated with technology and infrastructure while providing dynamism, independence, portability, usability, and scalability of platform tools.
 

Sunday, November 10

Android Error Solution / Solved : Unable to detect adb version, exit value 0xc0000135

I have a Windows 10 license, but I prefer using Windows 8.1 on my development laptop. Recently I refreshed my Windows 8.1 installation and reinstalled Android Studio 3.5.2, and I started getting the following error as soon as I opened Android Studio.

The error basically happens because I have a fresh Windows 8.1 with all required updates, but adb requires the 'Windows Universal C Runtime', which normally gets installed as part of Windows Update; because of the reinstall of Windows, that update is missing. As mentioned in the error text, Android Studio's recommended solution is to download the C run-time package from the Microsoft support website (the URL is in the image below), or else you can download the 3 ADB-specific files from the internet and add them to your platform-tools folder on your Windows machine.

Android Studio Error : Unable to detect adb version, exit value 0xc0000135





 


Solution:

  1. Before you begin implementing changes, back up the platform-tools folder on your machine
  2. Add the platform-tools directory path to the system PATH environment variable.
  3. Replace the following 3 files in your platform-tools directory at the path
    (C:\Users\{YourAccount}\AppData\Local\Android\Sdk\platform-tools). {YourAccount} has to be replaced with your Windows user account name. If your account name is DataScience, {YourAccount} should be replaced by DataScience in the path.
  4. The image below shows the 3 files that I downloaded and replaced in my platform-tools folder
  5. Restart Android Studio and the error should be fixed

Tuesday, November 5

Digital Medicine - Future of Research & Medical Care

The modern practice of scientific medicine depends on the existence of written and printed information to store medical knowledge. New digital tools don't just record clinical data; they can also generate medical intelligence by analyzing historical data. This leap of the industry into "digital medicine" is potentially more precise, effective, widely distributed & available to more people than current medical practice. Critical steps in the creation of Digital Medicine are the analysis of the impact of new technologies & coordinated efforts to direct technological development towards creating a new paradigm of medical care. So Digital Technology can be used in two areas in medicine: to aid research and for medical care.


3D modelling is used to produce precise representations of patients' anatomies. This enables medical teams to plan and visualize complex surgeries & to produce life-saving implants and prostheses customized for individual patients. It’s a remarkable evolution that is having a tremendous impact on patients’ lives. However, this is only the tip of the iceberg. The true revolution in medicine and medical care in the 21st century will not come from such physical models, but from virtual ones. Looking into the future, these virtual models will be able to simulate the true physiology and pathophysiology of human beings, changing forever the way we research, diagnose and treat injuries and disease.
While your virtual twin may seem like a distant dream, progress in bringing this dream to life is actually already well underway in the nascent field of Bio-intelligence. Bio-Intelligence uses computer technologies to model, simulate, visualize and experience biological medical processes in a virtual environment. While drug makers have for some time modeled and screened virtual proteins and compounds against medical databases, drug development and production remain largely rooted in the real world, and collaboration between disciplines and organizations has been limited.
Every day, drug makers work to produce real drugs that they test on real animals, and then on real patients in real clinical trials. And the time and money they expend is staggering. According to studies, companies can expect to spend $3 billion over a period of ten years to bring a single new drug to market.
Add to this challenge the dynamism and complexity of living systems, and it becomes clear that a collaborative approach to research and development, along with the use of virtual modelling and simulation, could bring enormous benefits to life science and healthcare industries. Collaboration between scientific disciplines and between pharmaceutical companies, research labs, health service providers and computer companies would allow sharing of knowledge and experience to foster insight and innovation.
And, the collaborative use of computer models and simulation would enable researchers to better understand complex systems and more accurately predict the biological effects of various medicines and treatments, enabling drug makers in turn to fine tune real-world assays and eliminate ineffective treatments from trials before the drugs are even produced.
The changing landscape of research today is forcing the bioinformatics community to seek a new level of data sharing and collaboration only made possible with new platforms. Such approaches could also open the door to truly personalized healthcare as collaboratively produced models and simulations are combined with real-world data from individual patients. These changes could produce significant innovation and gains in efficiency, effectiveness and safety, bringing better health treatment outcomes to everyone.


Saturday, October 12

Smarter BPM using Blockchain concept



Blockchain-based distributed ledgers have been used to enable collaboration in a number of environments ranging from diamond trading to securities settlement. The ability to execute defined scripts in the form of smart contracts, along with blockchain Distributed Ledger Technology (DLT), makes such systems capable of managing inter-organizational processes. Blockchain platforms that support both DLT and smart contracts should be capable of hosting not only business data but also the rules for managing that data. Smart contracts execute code directly on the blockchain network as a series of process steps, based on an algorithm programmed to the rules of the contract and the blockchain.



Multi-party Collaboration
Smart contracts can be used to implement business collaborations both within and external to the organization. A blockchain-based real estate registry, for example, would allow banks, government agencies, buyers, and sellers to collaborate and track the progress of a process in real time. Specific aspects of inter-organizational business processes can be compiled into rules-based smart contracts to ensure that processes are correctly executed. Smart contracts can independently monitor processes, so that only valid messages are accepted and only from registered process participants. Security and accountability can be factored into the contract, as well as compliance with government regulations and internal rules and processes.
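The rules described above (registered participants only, valid messages only, a defined order of process steps) can be sketched in plain Python. This is a toy model, not real smart contract code; an actual contract would be deployed on a blockchain platform, and the process steps for a real estate registry are assumed for illustration:

```python
class ProcessContract:
    """Toy sketch of smart-contract-style rules for an inter-organizational
    process: only registered participants may advance the process, and only
    in the defined order of steps."""

    # Hypothetical steps for a real-estate transfer process.
    STEPS = ["offer", "title_check", "payment", "registration"]

    def __init__(self, participants):
        self.participants = set(participants)  # registered parties only
        self.completed = []                    # append-only audit trail

    def advance(self, participant, step):
        # Reject messages from unregistered process participants.
        if participant not in self.participants:
            raise PermissionError(f"{participant} is not a registered participant")
        # Reject steps that arrive out of the defined order.
        expected = self.STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected step '{expected}', got '{step}'")
        self.completed.append((participant, step))
        return self.completed[-1]
```

The append-only `completed` list plays the role of the ledger's audit trail: every accepted step records who performed it, which is what gives the process its accountability.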

Blockchain and smart Business Process Management
Even though smart contracts are self-executing, they can play a role in business process improvement. For example, in the case of supply chains, information from blockchain-based tracking of goods and materials can be used to develop algorithms that would prevent counterfeit products or lower-quality materials from entering the chain. By combining the process information gathered by the smart contract with process visualization, lean, and Six Sigma techniques, improvements can be made to the rules governing smart contracts.

Sunday, September 8

Digital India cannot be achieved without Health Insurance Portability and Accountability Act

Let me begin by reiterating the subject line - Digital India cannot be achieved without a Health Insurance Portability and Accountability Act. America revolutionized its healthcare with computers, and when it noticed there was a need for a law to ensure compliance, it passed the Health Insurance Portability and Accountability Act, which also defines the requirements of Digital America and Digital Healthcare for America. If the Indian government wants Ayushmaan Bharat (which is similar to Obama Care in the USA) to succeed, it cannot be achieved without 2 important foundations:
1) A Data Protection Law to protect the healthcare and private data of every individual
2) A Health Care Accountability Law that mandates a certain standard of healthcare in every hospital

For an ordinary person, 'Going Digital' primarily means storing information in 'Digital Format'. For a government, 'Going Digital' also means guaranteeing protection of privacy for its citizens and allowing use of healthcare data in such a manner that the data is secure, restricted to authorized entities, kept private, and made available to authorized entities over the secured internet with minimal effort.

When you go to a hospital for a medical test, the test reports and your personal data are stored on some hospital computer system. The hospital gives you a print of your report and maintains your medical records for an undisclosed period of time, which could be infinite.

When you go to a 2nd hospital to take a 2nd opinion, you have to share your paper reports with the doctor, because in more than 99% of hospitals in #India your 1st hospital does not give you access to your report over the internet. The doctor at the 2nd hospital may ask you to do another round of tests and will again give you your reports in paper format.

After years a person has hundreds of pages of paper report and the report format varies from hospital to hospital because India does not mandate hospitals to have a standard format for medical records - a major failure of the Indian Medical Association, Government of India and other bodies who are responsible for implementing standards in healthcare.

The USA government signed the Health Insurance Portability and Accountability Act of 1996. The HIPAA Privacy Rule is composed of national regulations for the use and disclosure of Protected Health Information (PHI) in healthcare treatment, payment and operations by covered entities. HIPAA was created primarily to
  1. modernize the flow of healthcare information, 
  2. stipulate how Personally Identifiable Information maintained by the healthcare and healthcare insurance industries should be protected from fraud and theft, 
  3. and address limitations on healthcare insurance coverage.


HIPAA was created to “improve the portability and accountability of health insurance coverage” for employees between jobs and to combat waste, fraud and abuse in health insurance and healthcare delivery. The act also contained passages to promote the use of medical savings accounts by introducing tax breaks, provided coverage for employees with pre-existing medical conditions, and simplified the administration of health insurance. The procedures for simplifying the administration of health insurance became a vehicle to encourage the healthcare industry to computerize patients´ medical records. This particular part of the Act spawned the Health Information Technology for Economic and Clinical Health Act (HITECH) in 2009, which in turn led to the introduction of the Meaningful Use incentive program – described by leaders in the healthcare industry as “the most important piece of healthcare legislation to be passed in the last 20 to 30 years”.


https://www.hipaajournal.com/hipaa-history/

https://en.wikipedia.org/wiki/Health_Insurance_Portability_and_Accountability_Act

Friday, August 16

How AI in Healthcare is performing diagnosis and saving lives at NHS

A doctor can use Optical Coherence Tomography (OCT) scanners to scan an eye and detect eye diseases. OCT scanners create around 65 million data points each time they are used, mapping each layer of the retina, and that's a lot of data for a doctor to study. DeepMind's AI claims to recognise 50 common eye problems from the OCT data, which means a doctor does not have to spend time analyzing the data. The results of the AI have been promising in trials, with the algorithms correct 94.5 per cent of the time, which is equal to retina specialist doctors who were using extra notes along with the OCT scans.
DeepMind & Google joined forces in 2014 to accelerate AI research in healthcare and built a medical assistant application for the National Health Service. The significant AI work done by DeepMind in diagnosing eye diseases as effectively as the world’s top doctors, in saving 30% of the energy used to keep data centers cool, & in predicting the complex 3D shapes of proteins is disruptive in the field of Artificial General Intelligence (AGI).
The application, called Streams, is a mobile phone app that aims to provide timely diagnoses using AI so that the right nurse or doctor gets to the right patient in time and saves the life of a patient who would otherwise have died. Each year, many thousands of patients in UK hospitals die from conditions like sepsis and acute kidney injury (AKI) because the warning signs aren't picked up and acted on in time.

Streams, a mobile medical assistant for clinicians, has been in use at the Royal Free London NHS Foundation Trust since early 2017. The app uses the existing national AKI algorithm to flag patient deterioration, supports the review of medical information at the bedside, and enables instant communication between clinical teams. Shortly after rolling out at the Royal Free, clinicians said that Streams was saving them up to two hours a day. There were also reports of patients whose treatment was escalated thanks to a timely alert by the app. Statistics show that the app saved clinicians time, improved care and reduced the number of AKI cases being missed at the hospital.


The above figure shows how the automated process in the medical app saves time and connects the doctor directly to the patient with a serious condition.

There has been controversy around Google taking over NHS data after DeepMind was taken over by Google. DeepMind, which is now owned by Google, used to operate the NHS app independently until 2017. DeepMind justified the decision by explaining how Google would allow the app to scale in a way that would not be possible by itself. Earlier, in 2017, the Streams app attracted controversy after the UK’s data watchdog found that the NHS had illegally handed 1.6 million patient records to DeepMind as part of a trial. DeepMind subsequently made assurances that the medical data “will never be linked or associated with Google accounts, products or services”, and that all patient data will remain under the strict control of its NHS partners. As long as DeepMind does not share or link patient data with Google, it will be a major achievement for the NHS in providing smarter health monitoring for AKI and many more diseases.

Link to NHS Website-  link

Wednesday, August 7

Arnold Schwarzenegger motivational speech - Do you have a vision ?

I came across this motivational speech by Arnold Schwarzenegger. It is as relevant to people as it is to software. Unless you have a dream and a vision of where you want to go, you may not meet your goal. Most of the time the vision is like a dream that sounds too good to be true, too difficult to realize, but you have to realize that it is your dream - there is something in you that wants that dream to become reality.



A long time back, when I was in college, I came across a book in my father's library by Dr. Robert Schuller titled 'Success is never ending, failure is never final', in which he gives real-life examples of the many dreams that he realized with his power of positive thinking. When he started with a dream he did not know how to realize it; he did not have a plan, and the dream looked impossible. After dreaming the same dream in sleep and when awake, his mind could slowly start forming a vision of the possible ways to realize the dream. It was a slow process that took a few days, and it is important to believe in yourself and not give up on your dream. Those who are mentally strong continue to spend a reasonable time nurturing the dream. Dr. Schuller says it is here that your motivation is tested: if you are not passionate about your dream, you give up on it. All the successful people we know have this one quality - they did not give up on their dream, and even after minor failures they reevaluated the dream, renewed their faith in it and started again.
The human mind has this fantastic capability of processing information even when you are not awake, and there are tons of material, research papers and books written about this subject. Often you will realize that when you have a problem and can't find a solution, after a few days you think of a great idea to solve the problem. I am not a scientist and I have never done any research on the subject I am writing about, but I am passionate about these theories, and from personal experience I believe that when you are honest about solving some problem, the mind does some processing in its spare time and one fine day hands the solution to you. You may have had this experience and wondered why you didn't think of it sooner, but what you should realize is that you, your mind, or your subconscious mind - whatever you may like to call it - was aware of the problem, was processing it all the time, and finally came out with a solution; this is no coincidence. What I want to say is that your dream, your vision, your plan, your mind and your subconscious mind are all connected, and when you are motivated they work together to realize your plans.

So why is a software developer / software architect talking about Vision? Well, because when you build software you follow the same technique that you follow to plan your life.
  • You want to solve a real life problem or a business problem
  • You are able to visualize a software that will solve the problem - in your mind you see the solution
  • You can convince people why and how the software is relevant and sell them the idea
  • You know what the risks are and how you will mitigate them
  • You have a vision of how this software will be designed and what technologies will be used to build it 
  • You then create the roadmap for implementing the software
  • You make a plan to implement the prototype 
  • Once the prototype is successful you create a plan to build the software in stages
  • You monitor the software development so that things go as per the plan
If you miss any of the steps you may end up with an end product that is not perfect. Like #Arnold said, at the beginning you should have a vision, hunger and belief. Vision is something that is built on your knowledge. After a year you acquire more knowledge and experience, and you may realize that your vision needs some changes - and that is perfectly ok. Your vision is the outcome of careful deliberations and thoughts, and it should not change every day, but a vision can always improve when you have new insights.

   


                                              



https://www.youtube.com/watch?v=eWJVvNptHZ4

Understanding Generative AI and Generative AI Platform leaders

We are hearing a lot about the power of Generative AI. Generative AI is a vertical of AI that holds the power to #Create content, artwork, code...