Monday, December 20

Upgrading from Healthcare Solutions to Humancare Solutions: Part 1

The Oxford dictionary defines healthcare as 'the organized provision of medical care to individuals or a community'. This crisp definition does not quite capture the purpose and goal of a good healthcare system.

In my view, the complete definition of 'healthcare' should be: an integrated system that proactively delivers care to individuals. A healthcare system should store and use patient data and clinical data to provide better insights into a patient's health, which in turn helps the medical profession give better service to the patient at a lower cost.

From a technology provider's perspective, a good healthcare system uses continuous advances in technology to connect and organize the disparate entities of the healthcare landscape and deliver a seamless experience to individuals and institutions. Every entity in the healthcare landscape benefits and profits from a good healthcare system, but the ultimate beneficiary has to be the individual seeking healthcare services.

What needs to change for Health Care to become Human Care?

What I am trying to say is that most of the healthcare systems that exist today are focused on delivering medical services rather than health care to individuals. There is a need to build healthcare systems that keep individual care at the core of system design, and that means lifelong care of every individual who approaches the system. Once an individual requires medical services, he or she becomes part of the healthcare system, and the system should proactively monitor, manage, and deliver health care to individual patients. We are talking big: about a system built around individual health care, a system that reaches out to individuals rather than waiting for them to seek medical services, because the purpose of a responsible society and medical community in a vibrant democracy is to ensure good health for every individual.

So what is required of a good health care system?

1) Keep a record of all individuals from birth or from the time they register

2) Own the responsibility of maintaining medical records of every registered individual

3) Use medical records and clinical data to proactively reach out to individuals for health checkups

4) After treatment of chronic diseases, proactively monitor the health of registered individuals

5) Proactively deliver medical advice to all registered individuals

6) Share and connect an individual's medical history across the healthcare network

Let me take the example of a cancer patient who becomes part of the healthcare system at the age of 60. Let's assume that after treatment the patient recovers, goes home, and does not feel the need to return to the hospital. Healthcare providers know that cancer is a chronic disease that needs lifelong monitoring. The healthcare system should devise a care plan for the cancer patient and proactively connect with the individual to check on their health and recommend timely checkups to detect recurrence of the cancer. Recurrence is common in some types of cancer, and the system has the data to predict the possibility of recurrence and can save lives through periodic checkups.
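To make the idea concrete, here is a minimal, hypothetical sketch of such proactive follow-up logic; the condition names and checkup intervals are illustrative assumptions only, not medical guidance:

```python
# A hypothetical sketch of proactive follow-up: the system, not the patient,
# initiates recurrence checkups. Intervals below are illustrative assumptions.

from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative follow-up intervals per chronic condition (not medical guidance).
FOLLOW_UP_INTERVALS = {
    "cancer": timedelta(days=90),              # quarterly recurrence screening
    "coronary_blockage": timedelta(days=180),  # half-yearly cardiac review
}

@dataclass
class Patient:
    name: str
    condition: str
    last_checkup: date

def due_for_checkup(patient: Patient, today: date) -> bool:
    """True when proactive outreach should be triggered for this patient."""
    interval = FOLLOW_UP_INTERVALS[patient.condition]
    return today >= patient.last_checkup + interval

registry = [
    Patient("A. Sharma", "cancer", date(2021, 9, 1)),
    Patient("B. Rao", "coronary_blockage", date(2021, 5, 15)),
]

for p in registry:
    if due_for_checkup(p, date.today()):
        print(f"Reach out to {p.name}: schedule a {p.condition} follow-up")
```

In a real system, the registry would be the longitudinal medical record described above, and the outreach step would trigger calls, messages, or appointment bookings rather than a print statement.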

 

Another example is an individual who becomes part of the healthcare system when he gets treated for a coronary blockage. Medical professionals and healthcare systems have data showing that even after a coronary blockage is removed, there is a high probability that the patient 'with a heart condition' will face similar medical conditions over time and requires periodic checkups. The point I am trying to put across is that health care is not just providing medical services; health care is about caring for the health of individuals. We as experts in IT and medicine know we can provide health care in the true sense by designing smart systems that use individual and clinical data to save lives. Individuals who often neglect medical conditions out of lack of knowledge or awareness can be kept in the healthcare network through proactive follow-ups.

There is a cost associated with building such smart systems, maintaining data, and proactively connecting with every individual registered in the healthcare system. This cost is very small when we compare it to the medical expenses and suffering an individual has to bear if a disease is not detected early. Insurance companies would love to have such smart healthcare systems that do proactive checkups and detect medical conditions early, which would help them save billions in treatment of insured individuals. The challenge is that we do not have such Smart Health Care systems with a built-in Care Module that benefits individuals, insurance companies, and healthcare providers alike; after all, everybody wants affordable health care.

Smart Health Care is a need of our society because:

  • Smart Health Care ensures proactive monitoring and early detection of medical issues
  • Smart Health Care saves money spent on the health of every individual
  • Smart Health Care ensures limited medical infrastructure can serve more individuals
  • Smart Health Care ensures insurance companies pay less on medical treatments of their insured
  • Smart Health Care uses data for predicting diseases
  • Smart Health Care can help the pharma industry develop better medicines
  • Smart Health Care can help countries eradicate many diseases/illnesses
  • Smart Health Care ensures a healthy and productive community
  • Smart Health Care is also a right of every individual


Smart Health Care, Covid and Data

#Covid is the latest use-case proving that a Smart Health Care system would have simplified the management of Covid cases, helped us give better treatment to all registered individuals, and given us real-time clinical data to find effective treatment procedures for an epidemic like Covid. It took months of treatment before scientists found that certain medicines were not effective against Covid, because we do not have a unified system to collect data on individuals. If every individual were registered with one or more healthcare systems, we could have analyzed data in real time, identified the most effective treatment procedures within weeks, and saved millions of lives. In 2021 everybody understands the value of data; unfortunately, we do not have a system to collect, store, and derive insights from it.

In the next post -

I hope you have followed my thought behind this post. In the next post I plan to share a high-level design of a smart healthcare system that is beneficial as well as profitable to every entity in the healthcare system: a system that delivers benefits to individuals, hospitals, and insurance companies, as well as to scientists and pharma companies. I am talking about changing the way we look at health care 'as a service for those who want it' and making healthcare 'an essential service that takes care of people in an inclusive manner'. The time has come to move from Health Care to Human Care and guarantee proactive monitoring of health and timely, affordable treatment to every individual, women, men, and newborn children alike, by plugging them into the healthcare network.

In a connected world no human should be disconnected from the healthcare network. When our public and private healthcare providers unite to build a seamless healthcare network, we can truly deliver Human Care, that is, healthcare with a human touch, and not just medical treatment for those who reach a hospital and can afford the hospital expenses.



Wednesday, November 3

World Of Health Care - Top 10 challenges and opportunities in Health Care


1. Costs and transparency. Implementing strategies and tactics to address growth of medical and pharmaceutical costs and impacts to access and quality of care.

2. Consumer experience. Understanding, addressing, and assuring that all consumer interactions and outcomes are easy, convenient, timely, streamlined, and cohesive so that health fits naturally into the “life flow” of every individual’s, family’s and community’s daily activities.

3. Delivery system transformation. Operationalizing and scaling coordination and delivery system transformation of medical and non-medical services via partnerships and collaborations between healthcare and community-based organizations to overcome barriers including social determinants of health to effect better outcomes.

4. Data and analytics. Leveraging advanced analytics and new sources of disparate, non-standard, unstructured, highly variable data (history, labs, Rx, sensors, mHealth, IoT, Socioeconomic, geographic, genomic, demographic, lifestyle behaviors) to improve health outcomes, reduce administrative burdens, and support transition from volume to value and facilitate individual/provider/payer effectiveness.

5. Interoperability/consumer data access. Integrating and improving the exchange of member, payer, patient, and provider data and workflows to bring the value of aggregated data and systems (EHRs, HIEs, financial, admin, and clinical data, etc.) on a near real-time and cost-effective basis to all stakeholders equitably.

6. Holistic individual health. Identifying, addressing, and improving the member/patient’s overall medical, lifestyle/behavioral, socioeconomic, cultural, financial, educational, geographic, and environmental well-being for a frictionless and connected healthcare experience.

7. Next-generation payment models. Developing and integrating technical and operational infrastructure and programs for a more collaborative and equitable approach to manage costs, sharing risk and enhanced quality outcomes in the transition from volume to value (bundled payment, episodes of care, shared savings, risk-sharing, etc.).

8. Accessible points of care. Telehealth, mHealth, wearables, digital devices, retail clinics, home-based care, micro-hospitals; and acceptance of these and other initiatives moving care closer to home and office.

9. Healthcare policy. Dealing with repeal/replace/modification of current healthcare policy, regulations, political uncertainty/antagonism and lack of a disciplined regulatory process. Medicare-for-All, single payer, Medicare/Medicaid buy-in, block grants, surprise billing, provider directories, association health plans, and short-term policies, FHIR standards, and other mandates.

10. Privacy/security. Staying ahead of cybersecurity threats on the privacy of consumer and other healthcare information to enhance consumer trust in sharing data. Staying current with changing landscape of federal and state privacy laws.

“We are seeing more change in the 2020 HCEG Top 10 than we have seen in recent years and for good reason. HCEG member organizations express that the demand for, and pace of change and innovation is accelerating as healthcare has moved to center stage in the national debate. It shouldn’t be surprising that costs and transparency are at the top of the list along with the consumer experience and delivery system transformation,” says Ferris W. Taylor, Executive Director of HCEG. “Data, analytics, technology, and interoperability are still ongoing challenges and opportunities. At the same time, executives need to be cautious, as individual health, consumer access, privacy, and security are on-going challenges that also need to remain as priorities.”  

Turning challenges into opportunities

Reducing costs means lower revenue for providers and almost all of the players in healthcare, except for consumers and payers, says Mark Nathan, CEO and founder of Zipari, a health insurtech company. So while there are many incentives to keep healthcare costs high, if consumers are provided with the information they need to improve their health and drive down their personal costs, then we could see consumers en masse making decisions that drive down costs across the industry, he adds.

“Predicting cost in the traditional health insurance environment is shockingly complex,” Nathan says. “The most advanced payers can simulate claims and predict the cost of procedures. However, as you layer in full episodes of care, such as knee surgery, it becomes much harder to accurately predict the patient's total out-of-pocket cost. Bundled value-based payments start to make cost transparency a little easier to predict, but most plans still have a way to go to get to that type of offering.”

The greatest opportunity to drive down health costs––for payers, consumers, and system-wide––is with the payer-consumer relationship, he says. “Payers have the information consumers need to make better decisions about their health and finances––if plans can build positive and trusted relationships with their members. Once a payer proves it can make valuable and trusted recommendations, the consumer can make the decisions that will not only lead to better health outcomes but also to reduced cost of care.”


Saturday, June 12

Agile vs. Scrum

Two of the most common (and often conflated) approaches to project management are Agile and Scrum. Developers often ask how Scrum and Agile differ from one another, and how to choose the right approach for a project.
 
What is Agile Project Management?

Agile project management is a project philosophy or framework that takes an iterative approach towards the completion of a project. According to the Project Management Institute (PMI), the goal of the Agile approach is to create early, measurable ROI through defined, iterative delivery of product features.

Due to the iterative nature of Agile approaches, continuous involvement with the client is necessary to ensure that the expectations are aligned and to allow the project manager to adapt to changes throughout the process.

Agile is primarily a project management philosophy centered on specific values and principles. Think of Agile broadly as a guiding orientation for how we approach project work. The hallmark of an Agile approach is those key values and principles which can then be applied across different, specific methodologies.  

"If you're following an Agile philosophy in managing your projects, you'll want to have regular interactions with the client and/or end-users; you're committed to a more open understanding of scope that may evolve based on feedback from end-users; and you'll take an iterative approach to delivering the scope of work," Griffin says.

There are many different project management methodologies used to implement the Agile philosophy. Some of the most common include Kanban, Extreme Programming (XP), and Scrum.

What is Scrum Project Management?

Scrum project management is one of the most popular Agile methodologies used by project managers.

"Whereas Agile is a philosophy or orientation, Scrum is a specific methodology for how one manages a project," Griffin says. "It provides a process for how to identify the work, who will do the work, how it will be done, and when it will be completed by."

In Scrum project management, the project team, led by the project manager, consists of a product owner, Scrum master, and other cross-functional team members. The product owner is responsible for maximizing the value of the product, while the Scrum master is accountable for ensuring that the project team follows the Scrum methodology.

The Scrum methodology is characterized by short phases, or "sprints," during which project work occurs. During sprint planning, the project team identifies a small part of the scope to be completed during the upcoming sprint, which is usually a two- to four-week period of time.

At the end of the sprint, this work should be ready to be delivered to the client. Finally, the sprint ends with a sprint review and retrospective—or rather, lessons learned. This cycle is repeated throughout the project lifecycle until the entirety of the scope has been delivered. This mirrors aspects of traditional project management. One of the key differences, however, is how one creates "shippable" portions of the project along the way rather than delivering everything at the very end. Doing so allows the client to realize the value of the project throughout the process rather than waiting until the project is closed to see results.

What are the differences between Agile and Scrum?

On the surface, it is easy to see why Agile and Scrum can often be confused, as they both rely on an iterative process, frequent client interaction, and collaborative decision making. The key difference between Agile and Scrum is that while Agile is a project management philosophy that utilizes a core set of values or principles, Scrum is a specific Agile methodology that is used to facilitate a project.

There are also other notable differences between Agile and Scrum.

Key Differences:

  • Agile is a philosophy, whereas Scrum is a type of Agile methodology
  • Scrum breaks work down into short sprints with a deliverable at the end of each, while the Agile philosophy itself does not prescribe a specific delivery cadence
  • Agile involves members from various cross-functional teams, while a Scrum project team includes specific roles, such as the Scrum Master and Product Owner

It's important to remember that although Scrum is an Agile approach, Agile does not always mean Scrum—there are many different methodologies that take an Agile approach to project management.

Agile vs. Other Methodologies

While Agile and Scrum often get most of the attention, there are other methodologies you should be aware of. Below is a look at how Agile compares to Waterfall and Kanban, two popular project management strategies.
Agile vs. Waterfall

Waterfall project management is another popular strategy that takes a different approach to project management than Agile. While Agile is an iterative and adaptive approach to project management, Waterfall is linear in nature and doesn't allow for revisiting previous steps and phases.

Waterfall works well for small projects with clear end goals, while Agile is best for large projects that require more flexibility. Another key difference between these two approaches is the level of stakeholder involvement. In Waterfall, clients aren't typically involved, whereas in Agile, client feedback is crucial.

Agile vs. Kanban

Kanban project management is a type of Agile methodology that seeks to improve the project management process through workflow visualization using a tool called a Kanban board. A Kanban board is composed of columns that depict a specific stage in the project management process, with cards or sticky notes representing tasks placed in the appropriate stage. As the project progresses, the cards will move from column to column on the board until they are completed.

A key difference between Kanban and other Agile methodologies, such as Scrum, is that there are typically limitations regarding how many tasks can be in progress at one time. Project management teams will typically assign a specific number of tasks to each column on the board, which means that new tasks cannot begin until others have been completed.
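As a toy illustration of these mechanics (the column names and the WIP limit of three are assumptions for the example, not part of any Kanban standard):

```python
# A toy model of a Kanban board: columns hold cards, and a column with a
# work-in-progress (WIP) limit rejects new cards until others move on.

from typing import Optional

class KanbanBoard:
    def __init__(self):
        self.columns = {
            "To Do": {"cards": [], "limit": None},     # backlog: unlimited
            "In Progress": {"cards": [], "limit": 3},  # WIP limit of 3
            "Done": {"cards": [], "limit": None},
        }

    def move(self, card: str, source: Optional[str], target: str) -> None:
        """Move a card between columns, enforcing the target's WIP limit."""
        col = self.columns[target]
        if col["limit"] is not None and len(col["cards"]) >= col["limit"]:
            raise RuntimeError(f"WIP limit reached in '{target}'; finish work first")
        if source is not None:
            self.columns[source]["cards"].remove(card)
        col["cards"].append(card)

board = KanbanBoard()
board.move("Design login page", None, "To Do")
board.move("Design login page", "To Do", "In Progress")
```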

Agile vs. Scrum: Choosing the Right Project Methodology

Once you have a clear understanding of what Agile and Scrum are and how they work together, you can begin to think about applying these approaches to your own projects. But, given the differences between the two, this shouldn't be a question of whether you should take an Agile or a Scrum approach.

Instead, if you decide that an Agile approach is right for a particular project, the question is: Which Agile methodology should you use? The answer could be Scrum, or it could be one of the other various Agile methodologies that exist.

To decide if Agile is right for your project, you'll need to look at the specific requirements and constraints involved. Agile was originally created within the context of software development projects and is particularly effective in this arena. With this in mind, an Agile approach will not be effective for projects with very strict scope and development requirements. However, the guiding principles of the Agile philosophy are widely used across many different types of projects.

If an Agile approach is right for your project, you will then need to determine whether or not Scrum is the best Agile methodology for your specific needs and goals. Scrum is typically best suited to projects which do not have clear requirements, are likely to experience change, and/or require frequent testing.

It's important to remember that the key to a successful project isn't just about choosing the right methodology, but executing that methodology in a skillful manner. Doing so requires an expert understanding of the methodology you ultimately decide to employ in conjunction with other critical project management skills.  To be successful in their roles, project managers also need to know how to communicate effectively, lead a team, apply critical thinking and problem-solving skills, and be adaptable to the organizational dynamics and complexities around them.

Sunday, May 23

Workday Architecture

Workday Software As A Service

Workday is considered to be a leader in HR, payroll, and financial management services, and is a top SaaS-based cloud enterprise solution for performing many human resource business operations. Workday is an American cloud-based software company founded in 2005 by David Duffield (founder and former CEO of the ERP company PeopleSoft) and Aneel Bhusri, and headquartered in Pleasanton, California (United States of America). The main purpose of the Workday cloud-based management tool is to provide SaaS-based services such as human resource management and financial management, offering new levels of enterprise agility in buying, deploying, and maintaining software compared to legacy on-premise applications. The Workday tool is used by more than 200 companies, from mid-level firms to Fortune 500 companies. The tool is divided into different modules, of which two are considered the most important: Workday Human Capital Management and Workday Financial Management. These two modules play a key role in providing unparalleled agility, ease of management, and high-level integration capacity. The top partners of the Workday organization are Ceridian, Kronos, Plateau, Salesforce.com, Cornerstone OnDemand, NETtime Solutions, Patersons, Safeguard World International, StepStone Solutions, and Taleo.



At the heart of the architecture are the Object Management Services (OMS), a cluster of services that act as an in-memory database and host the business logic for all Workday applications. The OMS cluster is implemented in Java and runs as a servlet within Apache Tomcat. The OMS also provides the runtime for XpressO — Workday’s application programming language in which most of our business logic is implemented. Reporting and analytics capabilities in Workday are provided by the Analytics service which works closely with the OMS, giving it direct access to Workday’s business objects.

The Persistence Services include a SQL database for business objects and a NoSQL database for documents. The OMS loads all business objects into memory as it starts up. Once the OMS is up and running, it doesn’t rely on the SQL database for read operations. The OMS does, of course, update the database as business objects are modified. Using just a few tables, the OMS treats the SQL database as a key-value store rather than a relational database. Although the SQL database plays a limited role at runtime, it performs an essential role in the backup and recovery of data.

The UI Services support a wide variety of mobile and browser-based clients. Workday’s UI is rendered using HTML and a library of JavaScript widgets. The UI Services are implemented in Java and Spring.

The Integration Services provide a way to synchronize the data stored within Workday with the many different systems used by our customers. These services run integrations developed by our partners and customers in a secure, isolated, and supervised environment. Many pre-built connectors are provided alongside a variety of data transformation technologies and transports for building custom integrations. The most popular technologies for custom integrations are XSLT for data transformation and SFTP for data delivery.
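As a rough sketch of that XSLT-plus-SFTP pattern (this is not Workday's own tooling; the file names, host, and credentials below are placeholder assumptions), a custom integration could look like this in Python:

```python
# Sketch of a custom integration: transform an XML export with XSLT (lxml),
# then deliver the result over SFTP (paramiko). All names are placeholders.

from lxml import etree
import paramiko

# Transform the exported XML with a customer-supplied XSLT stylesheet.
transform = etree.XSLT(etree.parse("worker_export.xslt"))
result = transform(etree.parse("worker_export.xml"))

with open("worker_feed.xml", "wb") as f:
    f.write(etree.tostring(result, pretty_print=True))

# Deliver the transformed file to the downstream system over SFTP.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("sftp.example.com", username="feeduser", password="...")
sftp = ssh.open_sftp()
sftp.put("worker_feed.xml", "/inbound/worker_feed.xml")
sftp.close()
ssh.close()
```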

The Deployment tools support new customers as they migrate from their legacy systems into Workday. These tools are also used when existing customers adopt additional Workday products.

Workday’s Operations teams monitor the health and performance of these services using a variety of tools. Real-time health information is collected by Prometheus and Sensu and displayed on Wavefront dashboards as time-series graphs. Event logs are collected using a Kafka message bus and stored on the Hadoop Distributed File System, commonly referred to as HDFS. Long-term performance trends can be analyzed using the data in HDFS.


As per Gartner's 2019 report, Workday is considered a leader in data integration. Workday acts as middleware that hosts data integrations and also transmits the data. Workday was developed to help the financial management, human resources, and payroll teams in any organization. I hope this blog helps a few of you learn and gain valuable information about Workday.

Friday, April 30

Complex Event Processing on AWS - AWS EventBridge


Amazon EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated Software-as-a-Service (SaaS) applications, and AWS services. 

EventBridge delivers a stream of real-time data from event sources. Routing rules determine where to send your data, so you can build application architectures that react in real time to your data sources, with event publishers and consumers completely decoupled. Amazon EventBridge enables developers to route events between AWS services, integrated software-as-a-service (SaaS) applications, and your own applications. It can help decouple applications and produce more extensible, maintainable architectures.

With the new API destinations feature, EventBridge can now integrate with services outside of AWS using REST API calls.

EventBridge architecture

Event-driven architecture enables developers to create decoupled services across applications. When combined with the range of managed services available in AWS, this approach can make applications highly scalable and flexible, with minimal maintenance.

Many services in the AWS Cloud produce events, including integrated software as a service (SaaS) applications. Your custom applications can also produce and consume events. With so many events from different sources, you need a way to coordinate this traffic. Amazon EventBridge is a serverless event bus that helps manage how all these events are routed throughout your applications.

The routing logic is managed by rules that evaluate the events against event expressions. EventBridge delivers matching events to targets such as AWS Lambda, so you can process events with your custom business logic.
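As a minimal boto3 sketch of this model (the bus name, pattern fields, and target ARN are illustrative assumptions, anticipating the ATM sample application below), a rule and its target can be registered like this:

```python
# Sketch: register an EventBridge rule whose event pattern matches only
# approved ATM transactions, and target a Lambda function with it.
# Assumes the custom bus "bank-atm-events" already exists
# (it can be created with events.create_event_bus).

import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="approved-transactions",
    EventBusName="bank-atm-events",
    State="ENABLED",
    # Only events whose detail.result field equals "approved" match this rule.
    EventPattern=json.dumps({
        "source": ["custom.atm"],
        "detail": {"result": ["approved"]},
    }),
)

events.put_targets(
    Rule="approved-transactions",
    EventBusName="bank-atm-events",
    Targets=[{
        "Id": "process-approved",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:ProcessApproved",
    }],
)
```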

How EventBridge works

A banking application for automated teller machines (ATMs) produces events about transactions. It sends the events to EventBridge, which then uses rules defined by the application to route them accordingly. Three downstream services each consume a subset of these events.

Sample ATM application architecture
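The producer side of this sample application might look like the following sketch; the detail fields are assumptions chosen to match the rule sketched earlier, and each consumer only receives the events its rule matches:

```python
# Sketch: the ATM application publishes a transaction event to the custom bus.
# Rules like "approved-transactions" above decide which consumers receive it.

import json
import boto3

events = boto3.client("events")

events.put_events(Entries=[{
    "EventBusName": "bank-atm-events",
    "Source": "custom.atm",
    "DetailType": "transaction",
    "Detail": json.dumps({
        "action": "withdrawal",
        "amount": 300,
        "result": "approved",
        "location": "NY-ATM-0042",
    }),
}])
```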



Wednesday, April 28

Chef - The Expert Cook of DevOps

A DevOps engineer spends a lot of time deploying new services and applications, installing and updating network packages, and making server machines ready for deployment. This takes tedious human effort and requires huge human resources. By using configuration management tools like Chef or Puppet, you can deploy, repair, and update the entire application infrastructure using automation.

Chef is an automation tool that can deploy, repair, update, and manage servers and applications in any environment.

What is Chef?

Chef is a configuration management tool that manages infrastructure by writing code rather than using a manual process, so that it can be automated, tested, and deployed easily. Chef has a client-server architecture and supports multiple platforms such as Windows, Ubuntu, and Solaris. It can also be integrated with cloud platforms such as AWS, Google Cloud, and OpenStack.

Understanding Configuration Management

Configuration Management

Let us take the example of a system engineer in an organization who wants to deploy or update software or an operating system on hundreds of systems in one day. This can be done manually, but it may cause multiple errors: some software may crash while updating, and we would not be able to revert to the previous version. To solve such issues we use configuration management tools.

Configuration management keeps track of all the software- and hardware-related information of an organization, and it also repairs, deploys, and updates the entire application through automated procedures. Configuration management does the work of multiple system administrators and developers who manage hundreds of servers and applications. Some popular tools used for configuration management are Chef, Puppet, Ansible, CFEngine, and SaltStack. A conceptual sketch of how such a tool works follows below.
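Conceptually, every configuration management tool follows the same loop: declare the desired state, inspect the actual state, and change the machine only where the two differ. The Python sketch below illustrates that loop for a single file resource; it is a conceptual illustration of the idea, not how Chef itself is implemented:

```python
# Conceptual sketch of configuration management: declare desired state,
# compare with actual state, converge only when they differ (idempotent).

from dataclasses import dataclass
from typing import Optional

@dataclass
class FileResource:
    path: str
    content: str

    def current_state(self) -> Optional[str]:
        try:
            with open(self.path) as f:
                return f.read()
        except FileNotFoundError:
            return None

    def converge(self) -> bool:
        """Write the file only if its content differs from the declared state."""
        if self.current_state() == self.content:
            return False  # already in the desired state, nothing to do
        with open(self.path, "w") as f:
            f.write(self.content)
        return True

# The "recipe": desired state declared as data, applied the same way every run.
desired = [FileResource("/tmp/motd", "Welcome to the demo server\n")]

for resource in desired:
    changed = resource.converge()
    print(resource.path, "updated" if changed else "up to date")
```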

Why I prefer Chef?

Let us take a scenario: suppose we want our system administrator to install, update, and deploy software on hundreds of systems overnight. When the system engineer does this task manually, it may cause human errors, and some software may not function properly. At this stage we use Chef, a powerful automation tool that turns infrastructure into code.

Why Chef

Chef automates application configuration, deployment, and management throughout the network, whether it operates in the cloud or in a hybrid environment. We can use Chef to speed up application deployment. Chef is a tool for accelerating software delivery; here, the speed of software development refers to how quickly the software is able to change in response to new requirements or conditions.

Benefits of Chef

Accelerating software delivery

By automating infrastructure provisioning, everything the software requires, such as testing and creating new environments for deployment, becomes faster.

Increased service resiliency

By automating the infrastructure, we can monitor for bugs and errors before they cause outages, and recover from errors more quickly.

Risk management

Automation tools like Chef or Puppet lower risk and improve compliance at all stages of deployment. They reduce conflicts between the development and production environments.

Cloud adoption

Chef can be easily adapted to a cloud environment, and servers and infrastructure can be configured, installed, and managed automatically by Chef.

Managing data centers and cloud environments

Chef can run on different platforms; under Chef you can manage all your cloud and on-premise platforms, including servers.

Streamlined IT operations and workflow

Chef provides a pipeline for continuous deployment, from build to test and all the way through delivery, monitoring, and troubleshooting.


In summary, Chef tools help IT teams adopt modern best practices, including:

  • Test Driven Development: Configuration change testing becomes parallel to application change testing.
  • AIOps Support: IT operations can confidently scale with data consolidations and 3rd party integrations.
  • Self-Service: Agile delivery teams can provision and deploy infrastructure on-demand.


Friday, April 16

Infrastructure as Code

How do you manage your applications & IT infrastructure?

How do you provision, configure, manage, and scale up various elements on-demand for a large-scale IT infrastructure made up of networks, databases, servers, storage, operating systems, and other elements? 

Traditionally, a dedicated team of system administrators and specialists manually performed these tasks as and when the need arose. Resource provisioning is a complex task, and agility, flexibility, and cost-effectiveness all came at the cost of each other.

Infrastructure as Code (IaC) is a perfect solution to these challenges. IaC enables enterprises to automate infrastructure provisioning and scaling, which accelerates the speed at which cloud applications are developed, deployed, and scaled, all at a reduced cost.



What is Infrastructure-as-Code (IaC) and How Does it Work?

Infrastructure as Code uses machine-readable definition files (aka templates) written in a high-level descriptive coding language to automate IT infrastructure provisioning. Human intervention is minimized, and developers can focus on application development and deployment rather than on its resource needs.

IaC borrows software development lifecycle practices to automate resource provisioning. When there are changes in resource allocation and provisioning strategies, the changes are made to the definition files and, after thorough validation, rolled out to systems through unattended processes.

So, humans do not manually provision or configure the resources. They do not set up new hardware or software systems to support their applications. Everything happens at the code level.
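As a minimal sketch of this idea, assuming AWS with boto3 and a made-up JSON definition format (real IaC tools such as Terraform or CloudFormation use richer declarative templates):

```python
# Sketch of the IaC idea: resources are declared in a definition file and a
# tool provisions them without manual steps. The definition format below is
# an assumption for this example, not a real IaC tool's syntax.

import json
import boto3

# infra.json (the "definition file"), e.g.:
# {"buckets": [{"name": "demo-iac-artifacts-bucket", "versioning": true}]}
with open("infra.json") as f:
    definition = json.load(f)

s3 = boto3.client("s3")  # assumes us-east-1; other regions need a
                         # CreateBucketConfiguration argument

for bucket in definition["buckets"]:
    # Create the bucket only if it does not already exist (idempotent apply).
    existing = [b["Name"] for b in s3.list_buckets()["Buckets"]]
    if bucket["name"] not in existing:
        s3.create_bucket(Bucket=bucket["name"])
    if bucket.get("versioning"):
        s3.put_bucket_versioning(
            Bucket=bucket["name"],
            VersioningConfiguration={"Status": "Enabled"},
        )
```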

Infrastructure-as-Code (IaC) Workflow



Benefits of Infrastructure-as-Code (IaC)

The benefits of IaC are intuitive and precisely what you'd imagine them to be:

1. Lower Costs

You don't have to hire a team of professionals to routinely handle resource provisioning, configuration, troubleshooting, hardware setup, and so on by hand. It saves time and money.

2. Speedy Provisioning

IaC automates resource provisioning across environments, from development to deployment, by simply running a script. It drastically accelerates the software development life-cycle and makes your organization more responsive to external challenges.

3. Consistency

People commit mistakes. That's a fact. No matter how well you communicate and how much effort your team puts in, they are bound to make errors – several of them, in fact. In the case of IaC, the definition files are a single source of truth. There's never any confusion about what they do. You execute them repeatedly and get predictable results every time.

4. Accountability

When you need to trace changes to definition files, you can do it with ease. They are versioned, and therefore all changes are recorded for your review at a later point. So, once again, there's never any confusion on who did what.

5. Resilience

Resource provisioning and configuration is often a labor-intensive and skill-intensive task, usually handled by skilled resources. When one or more of them leave the organization, they take their knowledge with them. With IaC, resource provisioning intelligence remains with the organization in the form of definition files.

5 Challenges of Infrastructure-as-Code (IaC)

Every new solution comes with a new set of problems, and IaC is no different. The very things that make IaC so powerful and efficient also present some unique challenges to the organizations. Here's a brief overview of them:

1. Accidental Destruction

In theory, once you get automated systems operational, they do not need constant management beyond periodic fix-or-replace tasks. However, in reality, even automated systems encounter problems, and these problems accumulate over time into massive system-level disasters. In technical parlance, this is also called erosion.

2. Configuration Drift

Automated configuration often leads to the drifting of infrastructure elements over time. For instance, a fix introduced to one server may not be replicated on all servers. Although differences aren't always bad, it's essential to document and manage them.

3. Lack of Expertise

Creating definition files and testing them to ensure that they work flawlessly requires an in-depth knowledge of all the elements that comprise the organization's IT infrastructure. That's a rare set of skills.

4. Lack of Proper Design and Planning 

Automation initiatives involve many unknowns, and it is vital to identify and address them in the planning stage. Persistent testing and staggered implementation of automation projects can give skeptical business leaders the knowledge and confidence they need to helm more automation projects in the future.

5. Error Replication

In manual processes, it's easy to track human actions and replicate errors. With automation involved, error replication becomes an arduous task. Analyzing log files, workflows, and other data may not help system administrators recreate the error conditions accurately.

5 Principles of Infrastructure-as-Code (IaC)

The following principles are designed to help you maximize your ROI from your IaC strategy without falling into common pitfalls.

1. Reproduce Systems Easily

Your IaC strategy should help you build and rebuild any element of your IT infrastructure with ease and speed. They should not require significant human effort or complex decision-making.

All the tasks involved – from choosing the software to be installed to its configuration – must be coded into the definition files. The scripts and the tools that manage resource provisioning should have the information to perform their tasks without human intervention.

2. Idempotence

Meticulous business leaders are naturally skeptical of automated systems and their ability to perform complex tasks. Therefore, IaC must offer consistency, no matter how many times it is executed. For instance, when new servers are added, they must be identical (at least near-identical) to the current servers in capacity, performance, and reliability. This way, whenever new infrastructure elements are added, all decisions from configuration to hosting names are automated and predetermined.

Of course, some level of configuration drift will creep into the system with time, but that must be recorded and managed.

3. Repeatable Processes

System administrators have a natural affinity towards intuitive tasks. When resource allocation is imminent, they prefer to do it the most intuitive way – assess the resource requirements, determine the best practices, and provision resources.

Although effective, such a process is counter-productive to automation. IaC demands that the system administrators think in scripts. Their tasks must be broken down or clubbed together into repeatable processes which can be codified in scripts.

To be fair, this method introduces some level of rigidity into the system. If one server needs an additional partition of 40GB and another needs 80GB, IaC principles dictate that an overarching script allocating same-size partitions to both servers be executed. In this case, the 80GB partition would be the ideal choice.

4. Disposable Systems

IaC acutely depends on reliable and resilient software to make hardware reliability irrelevant to system operations. In the cloud era, where the underlying hardware may or may not be reliable, organizations cannot let hardware failures disrupt their businesses. Therefore, software-level resource provisioning ensures that hardware failure situations are immediately responded with alternate hardware allocation so that IT operations remain uninterrupted.

Dynamic infrastructure that can be created, destroyed, resized, moved, and replaced sits at the core of IaC. It should handle infrastructure changes like resizing and expansion gracefully.

5. Ever-evolving Design

The IT infrastructure design keeps changing to accommodate the evolving needs of the organization. Because infrastructure changes are expensive, organizations try to limit them by meticulously predicting future requirements and designing their systems accordingly. These overly complex designs make future changes even more difficult and therefore more expensive.

IaC-driven cloud infrastructure tackles this problem by simplifying change management. While the current systems are designed to meet current requirements, future changes must be easy to implement. The only way to ensure that change management is easy and quick is by making changes frequently so that all stakeholders are aware of the common issues and create scripts that overcome the respective issues effectively.

Conclusion

IaC, when implemented correctly, takes manual tasks off critical employees like software developers and lets them focus on DevOps delivery by working on continuous improvements instead. IaC eliminates infrastructure bottlenecks to a large extent and reduces infrastructure provisioning to self-service.

For the new age businesses that need to be agile, flexible, and responsive to market challenges, IaC is a perfect solution.


Wednesday, March 3

What role are Artificial Intelligence & Machine Learning playing in tackling Covid-19 Crisis


Artificial Intelligence and Machine Learning are playing a key role in better understanding and addressing the COVID-19 crisis. Machine learning technology enables computers to mimic human intelligence, ingesting large volumes of data and identifying patterns and insights in a short time. In the fight against COVID-19, organizations have been quick to apply their machine learning expertise in several areas: scaling customer communications, understanding how COVID-19 spreads, and speeding up research and treatment.

Enabling organizations to adapt to the new normal


Covid was an unexpected blow to every organization in every possible way. Every small and large organization is finding new ways to operate effectively to meet the needs of its customers and employees while social distancing and quarantine measures remain in place. Machine learning technology is playing an important role in enabling that shift, providing tools that support everything from remote communication to telemedicine and food security.

As every country in the world tries to restructure and reboot, AI and ML are playing a big role in helping companies. Healthcare institutions are using machine learning-enabled chatbots for contactless screening of COVID-19 symptoms and to answer questions from the public. One example is a Covid chatbot that makes it easier for people to find official government communications about COVID-19. Powered by real-time information from the government and the World Health Organization, the chatbot assesses known symptoms and answers questions about government policies. With almost 3 million messages sent to date, this chatbot is able to answer questions on everything from exercise to an evaluation of COVID-19 risks, without further straining the resources of healthcare and government institutions. Several European countries, such as France, are using the chatbot to decentralize the distribution of accurate, verified information.

Agriculture Solutions

Machine learning is also helping Agri-Tech and the food supply chain. To avoid any disruption to the food supply chain, food processors and governments need to understand the current state of agriculture. Agri-Tech companies are providing AI-driven crop-monitoring solutions to retailers free of charge to add resiliency and certainty to supply chains in the UK. The technology assesses satellite images of crops to flag potential issues to farmers and retailers early on, so they can better manage supply, procurement, and inventory planning. The platform deploys custom machine learning models to combine imagery from multiple satellites, enabling a near real-time assessment of agricultural conditions.
Crop identification algorithm layered on top of satellite image.

COVID-19 Research

Machine learning is helping researchers and practitioners analyze large volumes of data to forecast the spread of COVID-19, act as an early warning system for future pandemics, and identify vulnerable populations. Researchers are building models to estimate the number of COVID-19 infections that go undetected and the consequences for public health, analyzing 12 regions across the globe. Using machine learning and partnering with the Diagnostic Development Initiative, they have developed new methods to quantify undetected infections, analyzing how the virus mutates as it spreads through the population to infer how many transmissions have been missed.

AI to detect disease outbreaks

Boston Children's Hospital's web-based application HealthMap uses artificial intelligence and data mining to spot disease outbreaks and issue location-specific alerts on COVID-19 and other diseases; it sounded an early alarm on the pandemic. At the beginning of this pandemic, BlueDot, a Canadian start-up that uses AI to detect disease outbreaks, was one of the first to raise the alarm about a worrisome outbreak of respiratory illness in Wuhan, China. Using its machine learning algorithms, BlueDot sifts through news reports in 65 languages, along with airline data and animal disease networks, to detect outbreaks and anticipate the dispersion of disease. Epidemiologists then review those results and verify that the conclusions make sense from a scientific standpoint. BlueDot provides those insights to public health officials, airlines, and hospitals to help them anticipate and better manage risks.

 


Machine learning is helping leaders make more informed decisions in the face of COVID-19. In March, a group of volunteer professionals, led by former White House Chief Data Scientist DJ Patil, reached out to AWS for help supporting a scenario-planning tool that modeled the potential impact of COVID-19 in order to answer questions like: "How many hospital beds will we need?" or "For how long should we issue a shelter-in-place order?" They needed to scale their open-source model so governors across the US could understand the volume of exposure, infection, and hospitalization to better inform their response plans. In close partnership with AWS and the Johns Hopkins Bloomberg School of Public Health, the group moved the model to the cloud, allowing them to run multiple scenarios in just hours and to roll out the model to all 50 states and internationally, helping with decisions that directly impact the global spread of COVID-19.

Organizations are also examining ways to limit the spread of COVID-19, particularly among vulnerable populations. Closedloop, an AI start-up that we work with, is using its expertise in healthcare data to identify those at the highest risk of severe complications from COVID-19. Closedloop has developed and open-sourced a COVID vulnerability index, an AI-based predictive model that identifies people most at risk of severe complications from COVID-19. This 'C-19 Index' is being used by healthcare systems, care management organizations, and insurance companies to identify high-risk individuals, call them to stress the importance of hand-washing and social distancing, and offer to deliver food, toilet paper, and other essential supplies so they can stay at home.
