Friday, February 21

The Risks of Using Chinese DeepSeek AI in Indian Government Offices: A Data Security Perspective

Introduction

Artificial Intelligence is transforming governance, enhancing efficiency, and automating decision-making. However, when deploying AI solutions, especially from foreign entities, national security and data privacy must be top priorities. The recent rise of Chinese AI models, such as #DeepSeek, raises significant concerns if deployed within Indian government offices.
 

Understanding DeepSeek AI

#DeepSeek AI, developed by Chinese firms, is an advanced generative AI model comparable to OpenAI's ChatGPT or Google Gemini. While it offers powerful language processing, the core issue is data sovereignty—who owns, accesses, and controls the data that flows through these systems.

Key Data Leak Concerns

1. Data Storage and Transmission Risks

Many AI models rely on cloud-based processing, meaning data entered into #DeepSeek AI might be stored on servers outside India. If hosted in China, that data could fall under Chinese cybersecurity laws, which can require companies to make data stored on Chinese servers accessible to the government. This creates a high risk of unauthorized access to sensitive Indian government data.

2. AI Model Training and Retention of Sensitive Information

DeepSeek AI, like other generative AI models, continuously improves by learning from user inputs. If government officials unknowingly enter classified information, the model could retain and use this data in future responses. This creates a leakage pathway for confidential communications, defense strategies, and policy decisions.

3. Potential for AI-Based Espionage

China has been accused of using AI-driven data collection to support cyber espionage. If DeepSeek AI is embedded into Indian government operations, it could potentially be leveraged to:
 
  • Monitor government discussions
  • Analyze sensitive trends in policymaking
  • Extract metadata about officials, agencies, and strategies

Such risks make it untenable for a foreign AI system, especially from a geopolitical rival, to be integrated into government workflows.

Real-World Example: How a Data Leak Could Happen

Scenario: A Government Employee Uses DeepSeek AI to Draft a Report

Imagine an officer in the Ministry of Defence (MoD) is tasked with preparing a classified report on India's border security strategies in Arunachal Pradesh. To speed up the process, they enter sensitive details into DeepSeek AI, asking it to refine and format the document.

What Happens Next?

1. Data Sent to Foreign Servers:

DeepSeek AI processes the request on its servers, which may be located in China or other foreign jurisdictions. The model may store or analyze this sensitive input for further training.

2. Hidden Data Trails in PDF Files:

The AI-generated report is downloaded as a PDF and shared internally within the ministry. However, exported PDFs can carry hidden metadata, such as author and tool names, timestamps, and, depending on the export pipeline, traces of the generating session. If a cyberattack targets the ministry, these documents could reveal what was asked of the AI, including confidential border troop movements, defense procurement plans, and diplomatic strategies.
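To see what a document is silently carrying, the raw bytes of a PDF can be scanned for common Info-dictionary keys. The sketch below is a simplified illustration (a real audit would use a tool such as exiftool, which also parses embedded XMP packets and object streams); the sample byte string and its values are invented for demonstration:

```python
import re

def scan_pdf_metadata(raw: bytes) -> dict:
    """Scan raw PDF bytes for common Info-dictionary metadata keys."""
    keys = ["Author", "Producer", "Creator", "CreationDate"]
    found = {}
    for key in keys:
        # Matches entries like /Author (Jane Doe) in the Info dictionary.
        m = re.search(rb"/" + key.encode() + rb"\s*\(([^)]*)\)", raw)
        if m:
            found[key] = m.group(1).decode("latin-1")
    return found

# A tiny inline fragment standing in for a real PDF's Info dictionary.
sample = (b"1 0 obj << /Author (A. Officer) /Producer (SomeAI Exporter) "
          b"/CreationDate (D:20250221120000) >> endobj")
print(scan_pdf_metadata(sample))
```

Even this crude scan surfaces who created a document and with what tool, which is exactly the kind of trail an attacker would look for.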

3. Potential Cyber Espionage via AI Logs:

If DeepSeek retains logs of AI interactions, Chinese intelligence agencies could access fragments of sensitive information that were input by multiple Indian government users. Over time, even seemingly harmless prompts could help adversaries piece together critical insights about India's defense and economic policies.

Another Example: Finance Ministry & Budget Leaks

A Finance Ministry officer drafts an early version of India's Union Budget using DeepSeek AI to refine tax policy announcements.  The AI processes tax adjustments, subsidies, and proposed infrastructure allocations. If this data is retained or intercepted, it could provide foreign entities an unfair advantage in financial markets, potentially leading to stock market manipulation before the budget is officially announced.

4. Compliance with Indian Data Protection Laws

India's Digital Personal Data Protection Act (DPDP), 2023, mandates strict controls over cross-border data transfers. If DeepSeek AI processes government data outside India, it could violate these regulations, leading to legal repercussions and national security concerns.

Government Action Needed

1. Ban on Foreign AI in Sensitive Departments

India should restrict foreign AI tools from being used in government offices, especially in defense, law enforcement, and strategic sectors.

2. Development of Indigenous AI

Instead of relying on Chinese AI, India should focus on strengthening its own AI ecosystem through initiatives like Bhashini, IndiaAI, and partnerships with Indian tech firms.

3. Security Audits and Whitelisting of AI Tools

The government must enforce strict AI security audits and only approve AI models that meet data sovereignty and privacy standards.

Conclusion

While AI can revolutionize governance, national security should never be compromised. Allowing Chinese DeepSeek AI into Indian government offices could create serious data leak vulnerabilities. India must take a proactive stance by investing in indigenous AI solutions and enforcing stringent data security measures to safeguard its digital future.



Sunday, February 9

The Impact of Data Quality on AI Output

 


The Influence of Data on AI: A Student's Social Circle

Imagine a student who spends most of their time with well-mannered, knowledgeable, and
disciplined friends. They discuss meaningful topics, share insightful ideas, and encourage each
other to learn and grow. Over time, this student absorbs their habits, refines their thinking, and
becomes articulate, wise, and well-informed.
Now, compare this with a student who hangs out with spoiled, irresponsible friends who engage in
gossip, misinformation, and reckless behavior. This student is constantly exposed to bad habits,
incorrect facts, and unstructured thinking. Eventually, their ability to reason, communicate, and make
informed decisions deteriorates.

How This Relates to Large Language Models (LLMs)

LLMs are like students: they learn from the data they are trained on.
- High-quality data (cultured friends): If an LLM is trained on well-curated, factual, and diverse data,
it develops a strong ability to generate accurate, coherent, and helpful responses.
- Low-quality data (spoiled friends): If an LLM is trained on misleading, biased, or low-quality data,
its output becomes unreliable, incorrect, and possibly harmful.

Key Aspects of Data Quality and Their Impact on AI Output

1. Accuracy - Incorrect data leads to hallucinations, misinformation, and unreliable AI responses.
2. Completeness - Missing data causes AI to generate incomplete or one-sided answers.
3. Consistency - Inconsistent data results in contradicting outputs, reducing AI reliability.
4. Bias and Fairness - Biased data reinforces stereotypes, leading to unethical and discriminatory AI
responses.
5. Relevance - Outdated or irrelevant data weakens AI's ability to provide timely and useful insights.
6. Diversity - Lack of diverse training data limits AI's ability to understand multiple perspectives and
contexts.
7. Security and Privacy - Poorly sourced data may contain sensitive information, leading to ethical
and legal concerns.
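Some of these aspects can be checked automatically before training. The sketch below covers only the cheap-to-automate checks (completeness and duplicate detection); accuracy, bias, and relevance need domain-specific validation. The record fields and sample data are hypothetical:

```python
def quality_report(records, required_fields):
    """Minimal data-quality checks: completeness and duplicate detection."""
    incomplete = [r for r in records if any(not r.get(f) for f in required_fields)]
    seen, dupes = set(), []
    for r in records:
        key = tuple(sorted(r.items()))  # canonical form for duplicate detection
        if key in seen:
            dupes.append(r)
        seen.add(key)
    return {
        "total": len(records),
        "incomplete": len(incomplete),
        "duplicates": len(dupes),
    }

# Hypothetical training records for illustration.
data = [
    {"text": "The GST rate was revised in 2022.", "source": "news"},
    {"text": "", "source": "scrape"},                                 # incomplete
    {"text": "The GST rate was revised in 2022.", "source": "news"},  # duplicate
]
report = quality_report(data, required_fields=["text", "source"])
print(report)
```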

 

Conclusion: Garbage In, Garbage Out

Just as a student's intellectual and moral development depends on their environment, an AI model's
performance depends on the quality of the data it learns from. The better the data, the more
trustworthy and effective the AI becomes. Ensuring high-quality data in AI training is essential to
creating responsible and beneficial AI systems.

Understanding Large Language Models (LLMs) - Ajay

 Overview

There is a new discussion on India developing its own Large Language Models (LLMs), and some politicians have even planned to deploy #DeepSeek in Indian government offices. I have received many questions about LLMs, which have revolutionized artificial intelligence, enabling machines to understand, generate, and interact with human language in a way that was once thought impossible. These models power applications like chatbots, translation services, content generation, and more. But what exactly are LLMs, and how do they work?

What Are Large Language Models?

LLMs are deep learning models trained on vast amounts of text data. They use neural networks, specifically transformer architectures, to process and generate human-like text. Some well-known LLMs include OpenAI's GPT series, Google's BERT, and Meta's LLaMA.

Key Features of LLMs:

- **Massive Training Data**: These models are trained on billions of words from books, articles, and
web content.
- **Deep Neural Networks**: They use multi-layered neural networks to learn language patterns.
- **Self-Attention Mechanism**: Transformers allow models to focus on different parts of the input to
generate contextually relevant responses.

How LLMs Work

1. Training Phase
During training, LLMs ingest large datasets, learning patterns, grammar, context, and even factual
information. This phase involves:
- **Tokenization**: Breaking text into smaller pieces (tokens) to process efficiently.
- **Embedding**: Converting words into numerical representations.
- **Training on GPUs/TPUs**: Using massive computational resources to adjust millions (or billions)
of parameters.
2. Fine-Tuning and Reinforcement Learning
Once pre-trained, LLMs undergo fine-tuning to specialize in specific tasks (e.g., medical chatbots,
legal document summarization). Reinforcement learning with human feedback (RLHF) further
refines responses to be more useful and ethical.
3. Inference (Generation Phase)
When you input a query, the model predicts the most likely next words based on probability, crafting
coherent and relevant responses.
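At toy scale, the phases above can be made concrete with a bigram model: whitespace tokenization stands in for subword tokenizers, a table of next-token counts stands in for billions of learned parameters, and inference picks the most probable next token. This only illustrates the idea of next-token prediction, not how a transformer actually works:

```python
from collections import Counter, defaultdict

corpus = "the capital of france is paris . the capital of italy is rome ."

# Tokenization: real models use subword tokenizers (e.g. BPE);
# here we simply split on whitespace.
tokens = corpus.split()

# "Training": count which token follows which. This bigram table stands in
# for the billions of parameters a real LLM learns.
bigrams = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Inference: return the most likely next token after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("capital"))  # prints "of"
```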

Hands-On Exercise: Understanding Model Output

**Task:**
- Input a simple sentence into an LLM-powered chatbot (e.g., "What is the capital of France?").
- Observe and analyze the response. Identify patterns in the generated text.
- Modify your input slightly and compare results.

Applications of LLMs

LLMs are widely used in various industries:
- **Chatbots & Virtual Assistants**: AI-powered assistants like ChatGPT enhance customer support
and productivity.
- **Content Generation**: Automated article writing, marketing copy, and creative storytelling.
- **Translation & Summarization**: Converting text across languages or condensing information.
- **Programming Assistance**: Code suggestions and bug detection in development tools.

Case Study: AI in Healthcare

**Example:** Researchers have fine-tuned LLMs to assist doctors by summarizing patient histories
and recommending treatments based on medical literature. This reduces paperwork and allows
doctors to focus more on patient care.

Challenges and Ethical Concerns

Despite their potential, LLMs face challenges:
- **Bias & Misinformation**: Trained on human-generated data, they can inherit biases or generate
incorrect information.
- **Computational Costs**: Training LLMs requires expensive hardware and immense energy
consumption.
- **Security Risks**: Misuse of AI-generated content for misinformation or unethical applications.

Best Practices for Using LLMs

- **Verify Information**: Always fact-check AI-generated content before using it.
- **Monitor Ethical Usage**: Be mindful of potential biases and adjust model outputs accordingly.
- **Optimize Performance**: Fine-tune models for specific tasks to improve accuracy and reduce
errors.

 Future of Large Language Models

Research continues to improve LLMs by enhancing their efficiency, reducing bias, and making them
more transparent. As AI advances, these models will become more integral to various domains,
from education to healthcare and beyond.

Group Discussion: The Role of AI in the Future

**Question:**
- How do you see LLMs shaping different industries in the next 5-10 years?
- What ethical safeguards should be in place to ensure responsible AI use?

Conclusion

Large Language Models represent a significant leap in AI capabilities. Understanding their
strengths, limitations, and ethical implications is crucial for leveraging their potential responsibly. As
technology progresses, LLMs will continue to shape the future of human-computer interaction.

Tuesday, January 21

Prompt Engineering in Artificial Intelligence

AI prompt engineering has taken center stage in many industries since 2022. The reason is that businesses have been able to garner better results with AI using prompt engineering techniques. With the right prompt engineering strategy, the results of all AI and ML applications are improved.

Many individuals have also switched careers due to the high demand for prompt engineers in recent times. Seeing how industries are recognizing the importance of prompt engineering and its potential, it is undeniably one of the fastest-growing fields in the world of AI consulting.

But what is behind the hype over AI prompt engineering, and how exactly does it help businesses? Let us find out by taking a closer look at what AI prompt engineering is, along with its benefits and challenges.

What is AI prompt engineering?

AI prompt engineering is carried out by prompt engineers to leverage the natural language processing capabilities of the AI model to generate better results. Organizations are typically looking to achieve the following objectives with prompt engineering techniques:

  • Improved quality control over AI-generated results
  • Mitigate any biases in the output from the AI model
  • Generate personalized content for very specific domains
  • Get consistent results that are relevant to the expectations of the user.

All in all, prompt engineering means providing insightful prompts to an AI model to get accurate and relevant results without a lot of corrections or additional prompts. The goal is to go beyond the model's raw natural language processing abilities and give it exact instructions on how to respond.

This process is mainly done by understanding how the AI model interacts with different prompts and requests. Once the behaviors of the artificial intelligence or machine learning model are clear, prompt engineers can guide AI models with additional prompts that achieve the desired outcome.
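In practice, "guiding the model with additional prompts" often takes the form of a template that wraps the raw request with a role, constraints, and a few worked examples. The sketch below is model-agnostic string construction; the role wording, constraints, and example content are all hypothetical placeholders:

```python
def build_prompt(task, examples, constraints):
    """Wrap a raw task in a structured prompt: role, constraints, few-shot examples.

    The wording here is illustrative; effective phrasing is found through
    trial and error against the target model.
    """
    lines = ["You are a precise assistant for a business analytics team."]
    lines += [f"Constraint: {c}" for c in constraints]
    for question, answer in examples:  # few-shot examples steer output format
        lines += [f"Q: {question}", f"A: {answer}"]
    lines += [f"Q: {task}", "A:"]
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarize Q3 revenue drivers in two bullet points.",
    examples=[("Summarize Q2 costs in two bullet points.",
               "- Cloud spend rose 12%\n- Headcount was flat")],
    constraints=["Answer in at most 40 words.",
                 "Cite figures only from the input."],
)
print(prompt)
```

The same template can then be reused across the organization, which is what makes prompt engineering repeatable rather than ad hoc.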

Benefits of AI prompt engineering for today's business

Let’s get acquainted with the key benefits of prompt engineering:

Enhanced reliability:

After the right prompts have been set, the results generated by the AI model are very predictable and usually fall within your standards for informational accuracy. You could also set up the AI model to only deliver output that complies with content sensitivity guidelines.

Knowing that your results will only fall within the guidelines that you have set by prompt engineering AI models is very reassuring when it comes to reliability. Such a prompt-engineered generative AI can be very useful to publications for rapid content creation.

Faster operations

Establishing your requirements and expectations through AI prompt engineering beforehand can go a long way to speed up your operations in general. The time taken to generate the ideal result is reduced, as the objective is predefined in adequate detail to the AI model.

Additionally, you also spend less time working on errors generated in the final output because prompt engineering fine-tunes the responses of the AI model to replicate the ideal outcome as closely as possible, allowing you to cut down on the time spent on correction and reiteration.


Easier scalability

Since the accuracy and speed of AI-generated output are improved so drastically by prompt engineering, you also get to quickly scale the use of AI models across your organization. Once AI prompt engineers have figured out the ideal prompts, replicating similar results across the workforce becomes easy.

Users also can record all interactions with the AI model to understand how it reacts to different prompts, allowing them to refine their understanding of the model and its capabilities. This newfound knowledge can then, in turn, be used to further improve the results that are generated.

Customized AI responses

Perhaps the greatest advantage of using prompt engineering techniques is the ability to get customized results from your choice of AI models. The impact of customized responses can best be observed on bigger AI models such as ChatGPT, where there is a lot of variation in data.

While these larger AI models often generate very generalized and simple results, they can be fine-tuned to deliver responses at a much greater depth. Leveraging AI models in this manner can also deliver completely radical results that wouldn’t be possible unless you prompt engineer AI.

Cost reduction

Upon finding the best AI prompts for their applications, businesses can significantly speed up their AI-driven processes, which reduces the need for constant human intervention. As a result, the costs spent on corrections and alterations are reduced as well.

Prompt engineering also helps with the environmental cost of AI, which is rising rapidly as powerful, energy-hungry AI software sees rampant use. These reductions in cost may seem minuscule at first, but they quickly add up and help you save a lot of resources in the long run.

Challenges associated with prompt engineering

As fantastic as prompt engineering is, it does come with its fair share of challenges that are left for AI prompt engineers to deal with. The scope of these problems ranges from minor inconveniences to outright failure when generating a response.

Crafting prompts

While the advantages of effective prompting are brilliant, creating these prompts is a completely different ordeal. Finding the perfect prompts takes a lot of trial and error by human prompt engineers as they go through all of their options.

Over generalization

Over generalization is an issue with AI applications that can render them completely useless and occurs when the model provides a highly generalized result to any given query. This is exactly the opposite of what you want when implementing prompt engineering strategies.

While there are many reasons for over generalization, the ones related to prompt engineering are usually due to inadequate training data. Making your query too focused may force the AI model to give you a generalized answer as it lacks the data to give out a detailed response.

Interpretation of results

During the testing phase of new prompt formulations, prompt engineers have to accurately decipher the results delivered by the AI model. The evaluation of the quality of results is a time-consuming task that requires the prompt engineer to be vigilant at all times.

Ensuring that the output quality is up to the mark is only half the battle, as prompt engineers have to understand how they can refine their prompts to gain better results. If the interpretation of the results is incorrect, then the whole efficiency of the model is compromised. This is where the competency of AI prompt engineers is also tested heavily to ensure that they can implement AI in business with ease.

AI model bias

Almost all AI models possess some level of bias when it comes to their generated output. While this is not exactly malicious, it is an inherent part of using massive data sets to train AI models. Because these biases stem from data, there are not a lot of effective ways to mitigate them.

While prompt engineering does eliminate bias if done correctly, it is quite burdensome to identify all the biases that are present within an AI model. Factor in the time to generate new prompts based on the discovery of biases, and you can estimate how long it will take to get the perfect set of prompts.

Changes to data

Unless you have your very own AI model running locally, it is pretty difficult to have any control over the data used in the AI model. In such circumstances, it is very difficult to predict how existing prompts will hold up in the long term with future updates that are made to the AI model.

When additional data is added, the responses to pre-made prompts can be radically different from the expected result. Whenever such updates are made, it usually involves reformulating your entire prompt library to get the best out of AI solutions.
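One way to catch such drift is a small regression harness that re-runs the saved prompt library after every model update and flags prompts whose output no longer contains expected markers. `call_model` below is a stand-in for whatever model API is in use, and the stub, prompts, and markers are invented for illustration:

```python
def run_regression(call_model, cases):
    """Re-run a saved prompt library and flag prompts whose output drifted.

    `cases` pairs each prompt with substrings the answer must still contain.
    """
    failures = []
    for prompt, must_contain in cases:
        output = call_model(prompt)
        missing = [m for m in must_contain if m.lower() not in output.lower()]
        if missing:
            failures.append((prompt, missing))
    return failures

# A stub model for illustration -- replace with a real API call.
def stub_model(prompt):
    return "Paris is the capital of France." if "capital" in prompt else "Unsure."

cases = [
    ("What is the capital of France?", ["Paris"]),
    ("List three GST slabs.", ["GST"]),  # this one will be flagged
]
print(run_regression(stub_model, cases))
```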

Model limitations

In some cases, the prompts themselves would work well on certain AI models but wouldn’t be very effective on others. This is all because of the different limitations that are encountered in different AI and ML models, which makes AI consulting very difficult.

Since new AI models are being rolled out fairly frequently, it can quickly become overwhelming to adapt your prompt engineering tactics to other models. Some AI models might be downright incapable of generating coherent responses to your prompts altogether.

Who is prompt engineering for?

Much like with any other new solution, some sectors stand to gain more than others due to the nature of their operations. Knowing how prompt engineering supercharges the generative abilities of AI models, including AI marketing solutions, the following sectors can benefit the most from prompt engineering:

  1. Content Creation
  2. Data Analysis
  3. Finance
  4. Research
  5. E-Commerce
  6. Healthcare
  7. Legal Services
  8. Customer Services

One of the benefits of large language models is that well-engineered prompts yield better results than generic ones. Given the magnitude of difference this creates, it becomes essential to try to integrate prompt engineering practices. That said, while the advantages of prompt engineering are undeniably great, the investment of time and effort from a prompt engineer may not be worth it if you are in the initial stages of implementing AI solutions in your organization.

When integrating AI into regular work processes, it is very important to evaluate the capabilities of the AI model you choose and whether you can really benefit from prompt engineering.

 


 

 

Monday, September 2

10 Tips for Creating a Foundation Model for India

As we are discussing creating a Large Language Model (LLM) for India instead of using LLMs created by American and Chinese companies, I thought of sharing some tips to build an AI with a difference. Here are 10 key tips for building a strong foundation model for India, considering its unique linguistic, cultural, and infrastructural diversity:


 


  1. Multilingual Training Data

    • India has 22 official languages and hundreds of dialects. A robust foundation model must incorporate high-quality, diverse, and regionally balanced data across multiple languages.
  2. Bias Mitigation in Data

    • Socioeconomic, gender, and caste-based biases exist in many datasets. Implement bias detection and fairness checks to ensure inclusive AI outputs.


  3. Incorporation of Local Knowledge

    • AI should integrate indigenous knowledge, traditional practices, and cultural references to provide more accurate and contextually relevant responses. 


  4. Handling Low-Resource Languages

    • Many Indian languages lack sufficient digital data. Utilize transfer learning, synthetic data generation, and crowd-sourced datasets to enhance AI capabilities.

  5. Adaptation to Regional Variations

    • Words and phrases can have different meanings across states. Training should include localized NLP models to understand context-specific variations.
  6. Data Quality and Noise Reduction

    • Ensure datasets are accurate, well-annotated, and free from misinformation. Remove noisy or misleading data from social media sources.
  7. Infrastructure and Scalability

    • Indian users access AI on a wide range of devices, from high-end smartphones to basic feature phones. Optimize the model for efficiency and offline accessibility.
  8. Legal and Ethical Compliance

    • Follow India’s data protection laws (such as the DPDP Act) and ensure responsible AI practices to prevent misuse and protect privacy.
  9. Customization for Sectors

    • Train AI specifically for key Indian sectors like agriculture, healthcare, education, and governance to provide domain-specific solutions.
  10. Community Involvement & Open-Source Collaboration

  • Engage with local AI researchers, linguists, and developers to create an open, collaborative model that truly represents India's diversity.
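Tip 1's call for regionally balanced data can be monitored with a simple distribution check over language tags. The sketch below assumes documents are already tagged with a language code (a real pipeline would first attach tags with a language-ID model such as fastText); the corpus and the 20% floor are invented for illustration:

```python
from collections import Counter

def language_balance(docs, floor=0.10):
    """Report each language's share of the corpus and flag under-represented ones.

    `docs` is a list of (text, language_code) pairs; `floor` is the minimum
    acceptable share for any language.
    """
    counts = Counter(lang for _, lang in docs)
    total = sum(counts.values())
    shares = {lang: n / total for lang, n in counts.items()}
    underrepresented = [lang for lang, s in shares.items() if s < floor]
    return shares, underrepresented

# Hypothetical tagged corpus: 6 Hindi, 3 Tamil, 1 Assamese document.
docs = [("...", "hi")] * 6 + [("...", "ta")] * 3 + [("...", "as")] * 1
shares, low = language_balance(docs, floor=0.2)
print(shares, low)
```

A check like this, run on every new data batch, makes imbalance visible early, before it is baked into the model.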

Friday, December 22

Understanding Generative AI and Generative AI Platform leaders

We are hearing a lot about the power of Generative AI. Generative AI is a vertical of AI that holds the power to #Create content, artwork, code, and much more. Numerous studies have shown that this transformative capability has led to benefits across sectors: a 40% increase in efficiency in content creation, a 75% surge in creative output, and up to 90% growth in the level of automation in certain workflows. It is interesting to study how GAI (Generative AI) is revolutionizing traditional processes and opening doors to innovative possibilities. Generative AI, as you know, is a subset of AI that focuses on teaching machines to produce original and creative content.

While traditionally AI operates based on predetermined rules, Generative AI builds ability to learn from data and generate content autonomously. This technology leverages complex algorithms and neural networks to understand patterns and produce outputs that mimic human-like creativity.

The significance of generative AI lies in its potential to revolutionize industries across the board. From content creation to software development, generative AI tools are paving the way for greater efficiency, creativity, and innovation. Companies are increasingly adopting these tools to streamline their processes, reduce manual efforts, and unlock new possibilities that were once unimaginable.



  • Healthcare: In the medical field, generative AI assists in analyzing medical images, X-rays, and scans, diagnosing diseases, and predicting patient outcomes. Radiologists using generative AI for image analysis reported above a 30% improvement in accuracy in detecting subtle anomalies, ultimately leading to more timely and accurate diagnoses.
  • Software development: Generative AI is transforming the way developers write code. It aids developers by generating code snippets, improving software testing by identifying approximately 30% more defects, and even suggesting optimal solutions to coding challenges. These features result in faster development cycles, reduced redundancy, and better code quality.
  • Content creation: Writers, marketers, and content creators are utilizing generative AI to automate content generation, effectively streamlining workflows and achieving a remarkable 40% reduction in time spent on content creation. This efficiency boost allows them to focus on higher-level strategic tasks and creativity.
  • Language translation: Language barriers are being broken down as generative AI tools translate text and speech in real time, enabling seamless communication across diverse languages. These tools achieve an impressive 95% accuracy in translation, helping foster global collaboration and understanding.
  • Gaming: Developers are using generative AI to create immersive virtual worlds, generate in-game content, and adapt gameplay based on player behavior. Some reports have found that this real-time adaptation results in up to a 50% increase in player engagement and satisfaction, enhancing the overall gaming experience.
  • Finance: Institutions have long been leveraging generative AI to analyze market trends, predict stock movements with an impressive 85% accuracy, and optimize trading strategies. This technology-driven approach has led to a 25% increase in trading profitability and more informed investment decisions.
  • Artwork & design: Artists are exploring generative AI for creating unique visual art, illustrations, and designs, pushing the boundaries of creativity. A study found that incorporating generative AI in the design process led to a remarkable 75% increase in the number of innovative and eye-catching design concepts produced.
  • Music composition: Generative AI tools have extended their capabilities to the realm of music composition. These tools analyze existing musical compositions and generate original melodies, harmonies, and rhythms. Musicians and composers can leverage these tools to break creative barriers and discover new musical ideas.
As generative AI continues to advance, its applications across every industry vertical are expected to become even more sophisticated, further revolutionizing the way we work, create, and interact across various sectors.

You may want to read about the following platforms that are revolutionizing AI:
  1. ChatGPT
  2. Scribe
  3. AlphaCode
  4. GPT-4
  5. Bard
  6. Cohere Generate
  7. DALL-E 2
  8. Synthesia
  9. Flow Machines
  10. Claude - Multilingual Code Helper
  11. ArtBreeder - Image Creation
  12. AI Dungeon - Storyteller
  13. DeepCode - Code Reviewer & Suggestions
  14. Duet AI -
  15. PaintChainer - B/W Coloring tool


 

 
 

 


Tuesday, May 23

What is Artificial Intelligence infused BPM?

When I created Accenture's Point of View (POV) on BPM (a comparative study of BPM products, or iBPM products as Gartner likes to call them), we predicted a year-on-year rise of 25% in BPM adoption by the top 5 industry verticals and an average year-on-year productivity improvement of over 10% for enterprises with mature BPM implementations. Back then Digital Transformation was not a buzzword; today not just businesses but even governments are aggressively pushing for it. It feels great to see that people are well past discussing the need for BPM in Digital Transformation, and the discussion has moved on to the sensational 'Artificial Intelligence infused BPM'.
Artificial intelligence focuses on making already "intelligent" systems capable of simulating human-like decision-making and execution, enabling those systems to perform functions traditionally executed by skilled human professionals, but at a much higher level because of the speed and power of modern computing platforms. One needs to understand that for AI to really happen, the AI software architecture would have to resemble our own central nervous system, which controls most of what we do without conscious thought. Whenever AI matures, instead of nerve signals it will use algorithms to simulate human behavior.
Frankly, what we are implementing today does not have human-like decision-making capability, and that is why we cannot call it AI. AI is the future and huge investments in research are being made, but existing systems do not have intelligence similar to humans, because we cannot yet produce software that captures the emotional and biochemical aspects of a human brain. What people at large refer to as AI (as of Jan 2018) is actually Machine Learning driven by big data and data mining, which gives insight to improve decision-making; there is no human-like intelligence as claimed by some companies. The fact remains that insight from big data aids better and smarter decision-making, as we now have huge data volumes and the technology to process them at pace. We have been using insights from historical data to make better business decisions for quite a few years, and if the industry decides to call this data insight AI, then we can say AI and BPM are old friends.
So if someone tells you he is working on something revolutionary by integrating AI with BPM, you can tell him that AI-BPM has been in production for quite some time, actually for quite a few years (smile)! We did implement Smart Business Processes that could be triggered by events of certain types from a Complex Event Processing framework. We did implement real-time big data processing and integrated it with BPM to get insight from data in motion and make smarter decisions in real time. In short, we have been doing AI-driven BPM for years, so don't be swayed by tall claims from some AI-BPM expert!

The point I want to make is that though AI-BPM is not new, AI has been evolving at a fast pace along with ML, and we need to continuously innovate and integrate ML with BPM to get better business insights. What we have already implemented for various industries is a Smart Next Best Action (NBA) capability that helps a software system take better decisions in real time. Typically NBA is custom software that uses intelligent insights extracted from big-data processing to aid enterprise decision making. We use the word intelligent not because the system is smart like a human, but because it bases its decisions on millions of past records or transactions to recommend the most appropriate action, something that can almost act like a human, not because of intelligence but because of Machine Learning.
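As a toy illustration of the idea (not the actual implementation described above), a minimal next-best-action engine can simply recommend the action that most often led to a positive outcome for customers in a similar context; the context keys, action names and data here are hypothetical:

```python
from collections import Counter, defaultdict

# Toy next-best-action engine: for a given customer context, recommend the
# action that most often produced a positive outcome in historical records.
# Context tuples and action names are illustrative placeholders.
class NextBestAction:
    def __init__(self):
        # context -> Counter of actions that ended in a positive outcome
        self.history = defaultdict(Counter)

    def record(self, context, action, outcome_positive):
        if outcome_positive:
            self.history[context][action] += 1

    def recommend(self, context, default="offer_callback"):
        actions = self.history.get(context)
        # Fall back to a default action for contexts we have never seen
        return actions.most_common(1)[0][0] if actions else default

nba = NextBestAction()
nba.record(("retail", "high_value"), "loyalty_discount", True)
nba.record(("retail", "high_value"), "loyalty_discount", True)
nba.record(("retail", "high_value"), "upsell_premium", True)
print(nba.recommend(("retail", "high_value")))  # loyalty_discount
```

A production NBA system would of course mine millions of transactions with proper ML models, but the decision shape (historical evidence in, recommended action out) is the same.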

Here are some random industry numbers about AI & BPM  -
  • As of today, more than 50% of businesses processing Big Data have implemented AI solutions, and these businesses report a more than 50% increase in new business insights from Big Data.
  • AI has helped 50% of implementations make better business decisions, 20% of businesses claim improved automated communication with the help of AI, and only 6% of businesses claim to have reduced their workforce by implementing AI.
  • The most implemented area for AI is Predictive Analytics (e.g. weather data, operational maintenance, etc.)
  • More than 80% of implementors claim that AI has improved efficiency and created new jobs.
  • Almost all implementors acknowledge that data analytics technologies are more efficient when coupled with AI.

So the intelligence from insights into huge volumes of data is helping make businesses smarter, more proactive, more predictive, more efficient, more productive and more customer friendly, thus opening avenues for new products and business expansion.

So how is AI or Data Intelligence (DI) changing BPM?

  1. Intelligent Recommendations - Continuous machine learning can provide relevant recommendations to customers as well as the business
  2. Intelligent Marketing - AI can make recommendations to agents or directly to consumers using profile attributes and response behavior, and keep learning in real time, so that the next best offers are relevant to the customer and keep improving over time. Software can help the marketing agent deliver the right recommendations to the right customer at the right time.
  3. Process Automation - Data insights help reduce workflow inefficiencies, automate human tasks and processes, and reduce repetitive tasks.
  4. Preferential Treatment for Valued Customers - ML and predictive analytics can estimate a customer's behavior and guide the agent to satisfy the customer.
  5. Next Best Action - NBA helps agents by guiding them to the next best action that will solve a specific problem and lead to higher customer satisfaction; it also predicts sales lead conversion and reduces customer churn.
  6. Sales Prediction - Predictive analytics predicts the likelihood of a lead closing and suggests next best actions and strategies to the sales agent. A predictive engine can identify new sales opportunities that may not be outright visible to the team.
  7. Customer Retention - A predictive engine can predict customer churn and also suggest the steps required to retain the customer.
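To make the churn-prediction item concrete, here is a deliberately tiny sketch: score a customer's churn risk by a majority vote of the most similar historical customers. The features, data, and the nearest-neighbour approach are illustrative assumptions, not a real production model:

```python
import math

# Toy churn predictor: the churn risk of a customer is the fraction of the
# k most similar historical customers (by Euclidean distance over numeric
# features) who actually churned. Features here are hypothetical:
# (tenure_months, support_tickets_last_quarter).
def churn_risk(customer, history, k=3):
    nearest = sorted(history, key=lambda rec: math.dist(rec[0], customer))[:k]
    churned = sum(1 for _, did_churn in nearest if did_churn)
    return churned / k

# Illustrative historical records: (features, churned?)
history = [((24, 1), False), ((3, 7), True), ((2, 9), True),
           ((36, 0), False), ((5, 6), True), ((30, 2), False)]

# A short-tenure customer with many support tickets looks like past churners.
print(churn_risk((4, 8), history))  # 1.0 -> high risk, trigger retention action
```

A real predictive engine would use a trained model over far richer features, but the output is used the same way: a risk score that drives a retention workflow.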

Game changing BPM & Data Intelligence/ Artificial Intelligence

What BPM can deliver today is not just efficient and smart process management but real-time business management. The BPM game has changed, and BPM now offers

1) Predictive Business - Analyze, Sense, Learn, Predict, Act
2) Proactive Recommendations leading to better customer service
3) Reduced Churn by predicting and addressing customer concerns
4) Better value delivered to the customer based on customer insight
5) Better Forecasting through a 360-degree view of customer and business
6) A real-time enterprise proactively addressing real-time events

There are many BPM vendors, and vendor analyses by Accenture, Gartner or Forrester can help you decide which BPM vendor has the product and features that are right for your solution. Pega and Appian were some of the leading BPM players of 2018, but there are at least 19 BPM vendors to choose from; you can refer to How to select a BPM (Business Process Management) product? to learn how to go about selecting the right BPM product.

Thursday, March 23

The tomorrow of every connected enterprise is the Hyper-connected Enterprise

 

Hyper-connected Home

Where do you buy sugar from? Do you pay cash at the local grocery store, or order sugar on a mobile app? If you are using #BigBasket or #Amazon to buy 1kg of sugar, then you should be aware that hundreds of computers/devices and at least half a dozen companies, from manufacturer to stockist to cargo delivery to Amazon, are all collaborating over a 'hyper-connected network' to get your sugar delivered on time. That means you are already part of Amazon's Hyper-connected Enterprise, where computers, devices and employees create a seamless delivery experience for the customer. Your smart city is becoming a Hyper-connected city: water, electricity, garbage, emergency services, healthcare, social welfare, everything has either moved or is moving to a Hyper-connected network of enterprises. You may want to check how companies like Libelium World are helping monitor the environment in smart cities in Spain.

Take another example from the entertainment industry. How do you watch your favorite sport? In your living room on a large-screen TV, live-streamed on your mobile/tablet/laptop, or on a dedicated app like Hotstar? Most likely you use all of these mediums to watch a game of live cricket. Entertainment today is delivered everywhere and anywhere you choose to be. This is possible because #Hotstar and #Accenture have created a Hyper-connected environment that delivers content on every medium, even when you have poor network connectivity. The energy sector suffered huge losses because of electricity leakage and was one of the first to adopt Smart Meters and smart energy distribution systems. A decade back, #USA #Florida based Duke Energy claimed that its electrical system had the capacity to automatically detect, isolate, and reroute power when a problem occurs.
 

Your enterprise too is live 24/7, because customers expect you to provide service not just on the phone between 9 and 5 but at a time that is convenient to them and over a medium of their choice. The epidemic has only accelerated seamless service delivery over digital mediums in a transparent, reliable and secure manner. The enterprise has evolved in many ways, and new business models of collaboration are in play to deliver services to the consumer. The digital enterprise is evolving to a new paradigm called the Hyper-connected Ecosystem. The biggest change we are witnessing today is not just the mandate for extraordinary agility and business resilience but also a drastic shift in consumer demand. The evolution of the digital ecosystems driving businesses today requires organizations, people and devices connecting seamlessly over an effective hyper-connected ecosystem.

Some leaders talk of the Hyper-connected Enterprise as the next phase of the Digital Enterprise; what they don't realize is that the evolution started years back and you are already on a Hyper-connected Network. The core idea behind the Digital Enterprise was always to deliver services over a connected ecosystem, and the pandemic has necessitated innovation to make a business impact and ensure sustainability. The focus today is to enable new, innovative, open-ended technology solutions that are seamless and integration-ready to reach the masses.
 
Blocks of Hyper-connected Enterprise

 
Some enterprises may already have a well-designed information system that requires minimal work to become a hyper-connected enterprise. The remaining enterprises have to redesign their processes and underlying systems and imbibe a culture of digital dexterity. The enterprise vision has to change, along with the employee mindset, to adapt and embrace emerging technologies alongside existing technologies to achieve better business outcomes and deliver new products. Not an easy order, as it requires the enterprise and its employees to learn new skills as well as change the culture. Every industry is either moving to become a hyper-connected enterprise or will have to transform quickly if it intends to stay relevant and compete with competitors who have adopted the new way of doing business.

The journey

Today enterprises need to be connected to deliver value. From lead generation to fulfillment to customer support, all processes have to be digital. In the next post we will discuss some examples of the Everywhere Digital Enterprise. Until everything is connected to everything else...




 

Thursday, January 19

The Evolution of Software Integration

 
Every successful enterprise depends heavily on its underlying software applications and the communication between those applications. The problem is that, as time goes by, enterprises invariably end up with software created with disparate technologies and built by several vendors.

The number of software applications varies with the size of the organization. According to research, small businesses use an average of 10 to 22 applications, and in large enterprises this number rises to an amazing 700 to 1000 software applications!

The Motivation for Software Integration

All these disparate software applications often need to work together, and this is where software integration comes in. I see various motivations for software integration when I talk to business owners and IT managers. They usually want to achieve one of the following:

  • Produce a unified set of functionalities, for example, a unified customer support system
  • Increase productivity by reducing the need to switch between applications
  • Have easier user adoption, especially if one of the software applications being integrated is new
  • Enable data analytics by getting data from multiple sources
  • Automate data entry – getting data from another application is less costly than manual data entry

In the early stages of software integration, one of the main issues was that everything was proprietary and closed. You would buy an application, and all the information you put into it was accessible only from within that application. Not to mention that it was often available only on a single machine or a limited set of machines. If you wanted, for example, to access that information from another software application, you were in trouble.

But when there is a will, there is a way, and so software integration started. The software integration challenges were initially addressed by implementing in-house integration solutions that used ad hoc and proprietary protocols themselves.

Eventually, with the addition of more and more software systems and the wider spread of internal and external networks, home-grown solutions were no longer adequate. Software integration evolution had reached a new level. The motto "no system is an island" became common, but there was still no standard solution to the problem.

Software Integration Evolution & APIs

Over several decades, enterprise integration technologies leveraged one or more integration styles. The earliest integration style was the Remote Procedure Call (RPC) style, used in single-language environments and very popular in the 80s and early 90s. Later, multi-language systems such as CORBA appeared. In these, an explicit Interface Definition Language (IDL) was used, so that the caller and callee could be implemented in different programming languages and even different platforms.
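The RPC style above can be demonstrated with Python's standard-library XML-RPC modules; this throwaway in-process example (the `add` method and port choice are illustrative, not from the article) shows how a remote procedure is invoked as if it were a local function:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Minimal illustration of the RPC integration style: the caller invokes a
# procedure on another process as if it were a local call, with the RPC
# machinery handling transport and (de)serialization.
def rpc_add(a, b):
    # Spin up a throwaway server on a free port (port 0 = OS picks one)
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    port = server.server_address[1]
    server.register_function(lambda x, y: x + y, "add")
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        client = ServerProxy(f"http://127.0.0.1:{port}")
        # Looks like a local call; actually travels over HTTP as XML
        return client.add(a, b)
    finally:
        server.shutdown()
        server.server_close()

print(rpc_add(2, 3))  # 5
```

Multi-language successors like CORBA generalized exactly this pattern by describing the `add` interface in a neutral IDL instead of tying it to one language.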

In the end, the use of Application Programming Interfaces, best known as APIs, became the rule. APIs emerged to expose business functionalities provided by one application to other applications. APIs exist so you can reuse them – the concept has been, from the beginning, that multiple applications could consume the same API. The idea was that developers could rely solely on the API to access an application's functionality without worrying about their implementation details.

What Is API Management & Why You Need It

When developers start using an API, they hope that the API is a contract that will not change. However, APIs are susceptible to the same environmental pressures to change that all software systems face. They are often upgraded, reworked, and sometimes refactored. When this happens, finding actual points of change in the API and making things work again is painstaking work for the developer. This is why API lifecycle management appeared.

Developers also hope they will have to work with as few APIs as possible. This is because each new API represents a new learning curve and involves time and effort. Moreover, given the upgrade problems we mentioned, the developer knows it helps to have few APIs and few inter-dependencies, so as not to fall into spaghetti-style integration.

Spaghetti Style Software Integration

The thing is that this is not always up to the developer, as the need to integrate different software systems grows.

This is what API management is: API lifecycle management for multiple APIs.

As a result of sheer demand, API management using middleware has emerged as a way of using APIs and getting their advantages, while avoiding their known problems. Please note we are looking at API management from the API consumer perspective. If you look at it as an API producer, then the focus will be different.

API Management Tools As Integration Software Solutions - An Example

The new middleware technologies have eliminated the need to call APIs directly. Instead, the developer writes SQL in their new or legacy code, then uses prebuilt connectors to translate the standard SQL syntax in that code into API calls. These calls retrieve the needed information from the target system. This works both for retrieving data (SELECT) and for inputting or changing it (INSERT, UPDATE, DELETE).

Connect Bridge API management

The middleware acts as a translator that speaks all the API variants the developer needs, translating them into ANSI standard SQL syntax that the developer knows well and can use together with his favorite programming language, such as Python, Java, or C#, just to name a few.

By using such translating middleware, the developer no longer needs to learn a new programming language or gain expertise in the target system API. This makes all the difference, dramatically reducing the time and effort necessary to integrate software.
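To make the translation idea concrete, here is a heavily simplified sketch of what such a connector does internally. The one-rule SQL grammar and the fake CRM "API" are stand-ins for a real middleware driver and target system:

```python
import re

# Stand-in for a target system's API (a real connector would call a remote
# service here; names and data are purely illustrative).
class FakeCrmApi:
    def __init__(self):
        self._contacts = [{"name": "Ada", "city": "Pune"},
                          {"name": "Alan", "city": "Delhi"}]
    def list_contacts(self):
        return list(self._contacts)

# Toy SQL-to-API translation: parse a narrow SELECT form, then satisfy it
# with API calls plus client-side filtering. Real middleware does the same
# mapping for full ANSI SQL across many target systems.
def sql_select(api, query):
    m = re.fullmatch(r"SELECT (\w+) FROM contacts WHERE (\w+) = '(\w+)'",
                     query.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("unsupported query in this toy grammar")
    column, filter_col, value = m.groups()
    rows = api.list_contacts()  # the SQL statement became an API call
    return [row[column] for row in rows if row[filter_col] == value]

api = FakeCrmApi()
print(sql_select(api, "SELECT name FROM contacts WHERE city = 'Pune'"))  # ['Ada']
```

The developer only ever sees the SQL surface; if the target system's API changes, only the connector's internals (here, `sql_select`) need updating.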

Using SQL Connector, the developer has two options:

  • build their own custom integration software in the programming language of their choice, or
  • start from the source code of any software from the past 40 years.

In both cases, completing the integration will require only a few lines of code and be quite straightforward.

Using such middleware also eliminates the need to redo your code when you upgrade the target system or its APIs. The middleware company itself will handle all the maintenance efforts. It is now their job to guarantee forward compatibility (and sometimes backward compatibility too).

Ultimately, API management gives enterprises greater flexibility by enabling them to go for the software integrations they need while shielding them from the negative aspects and not compromising on security, e.g. maintaining GDPR compliance.

Last word

Software integration has long been a pain point for businesses, often leading companies to either maintain their legacy systems for longer than they should or fork over large sums of money to developers to migrate to the latest and greatest.

Fortunately, with software integration evolution, you can easily solve current integration challenges and prepare companies for the future by using today's technology of API management middleware. Whether to simply share data between systems, to modernize legacy systems, or to meet complex requirements, endless integration possibilities are at your fingertips once you start using API middleware.

Wednesday, August 31

The Secrets to DevOps Success - 2

A DevOps implementation strategy is a key focus for most organizations as they embark on their Digital Transformation journey. Though it sounds like quite a straightforward initiative to automate the software delivery process, it has many challenges, as I have discussed in past posts.

DevOps Market size exceeded $7 billion in 2021 and is expected to grow at a CAGR of over 20% from 2022 to 2028 to a value of over $30 billion. In 2021, 65% of the DevOps market’s value in the USA was made up of DevOps solutions (tools) with 37% accounted for by services. By 2028, around 55% of the market’s value is forecast to be accounted for by DevOps services and the remaining 45% by tools.

In 2001, when I implemented my first development process automation, it was more about automating redundant manual processes to save time and avoid manual errors in the build and release process. We were a small team delivering a small project for a US client; we used to face failures during every release, and it was very embarrassing for the entire team to attend the post-release meeting. All we wanted was a smooth, bug-free release without spending all night at the server machines. We automated our build and release process, and unknowingly we started working closely together to ensure the issues we had faced in the past did not recur. We collaborated across teams, we stopped blaming other teams, we learned every step of the code/build/test/release/configure/deploy process, we automated manual tasks and we monitored every step of the release process. Soon we were doing perfect code drops for every release, and we started leaving the office together to enjoy post-release drinks. We were not doing DevOps, but we experienced a cultural change and worked as one team.
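The kind of build-and-release automation described above can start very small: a script that runs each stage in order and stops at the first failure. This is a minimal sketch; the stage names and placeholder commands are hypothetical, standing in for a real compile/test/package/deploy toolchain:

```python
import subprocess
import sys

# Placeholder stages: each command here just prints, standing in for real
# build/test/package tooling in an actual pipeline.
STAGES = [
    ("build",   [sys.executable, "-c", "print('compiling...')"]),
    ("test",    [sys.executable, "-c", "print('running tests...')"]),
    ("package", [sys.executable, "-c", "print('packaging artifact...')"]),
]

def run_pipeline(stages):
    """Run stages sequentially; abort on the first non-zero exit code."""
    for name, cmd in stages:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"[FAIL] {name}: {result.stderr.strip()}")
            return False
        print(f"[ OK ] {name}")
    return True

if __name__ == "__main__":
    success = run_pipeline(STAGES)
    print("release ready" if success else "release aborted")
```

Modern CI/CD tools are far richer, but the cultural effect is the same one we stumbled into in 2001: once the steps are scripted and visible, every team owns the whole release path.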

Over the last few years we have been recommending DevOps to our clients as the right way to do releases in their business transformation or digital transformation journey. What we have observed is that, in spite of a large number of new tools, dashboards and on-demand infrastructure, it is still a big challenge to implement a successful DevOps process in an organization. Let's take a quick look at some of the things that can help.

To implement technology strategically, businesses need to start by creating a cultural shift toward the principles of DevOps. It starts with people and processes, and then products. You cannot simply throw a tool or even multiple tools at the problem and hope that it will be solved. To transform your business, you need to embrace velocity: making incremental changes, delivering small iterations and going faster. This often means disrupting your own business and cannibalizing your existing offerings before disrupting the market. There are a few key elements of DevOps culture that must be adopted before you begin thinking about your product toolkit.

1) Empower teams by embracing collaboration 

Encouraging collaboration is one crucial way to empower employees. By keeping all stakeholders involved in the process, employees can communicate impact in real time, keeping execution moving along. Collaboration enables product management, development, QA, security and operations teams to work together directly instead of waiting for handoffs. The values of diversity, inclusion and belonging are fundamental to creating a culture of collaboration within your organization. Collaboration across teams and across levels brings in multiple perspectives, and by ensuring that each perspective has a say, we invite innovative ideas, empowered teams and smarter, more informed decision-making. The culture of collaboration has to be driven down the hierarchy by the top leadership, leading by example and rewarding collaborators. If collaboration is not yet one of the KPIs for your management leaders, it is time to embrace it now.

2) Iteration Planning

You can go faster by breaking things down into smaller pieces: the smaller we split things up, the smaller the steps we take, the faster we can go. Smaller iterations are better because they take less time, get delivered faster, carry lower risk and have a quick turnaround time, encouraging people to be more creative. I remember my mother telling me to take small bites and chew well, and the food will show on you; it worked there and it works everywhere. Encouraging iterations is also a step away from the stagnant waterfall mentality toward an agile, calculated risk-taking attitude.

3) Focus on results

Employees should be acknowledged for what they accomplish and complete, not how long it took them or where they worked. Create a culture where team members feel trusted to structure their own days and do what it takes to get the results that customers require. Start by finding simple solutions to the problem instead of flashy, complicated ones.

It is impossible to transform a business without setting the mood with a collaborative culture. Start by finding ways for collaboration in areas where you currently have silos, iteration where there is stagnancy, and efficiency where there are lags.


Monday, August 29

The Secret to DevOps Success

Gartner predicted that through 2022, 75% of DevOps initiatives would fail to meet expectations due to issues around organizational learning and change, and in 2021 a Tech Radar survey indicated that 80% of DevOps initiatives failed to achieve desired goals; mind you, this is the percentage of projects that failed to meet the desired goals and expectations. In other words, people-related factors tend to be a bigger challenge when implementing DevOps than technology and tooling challenges.

DevOps delivers Maximum value when aligned to customer value

It has been observed that organizations often launch DevOps efforts with insufficient consideration of business outcomes and without clarity of goals. I&O leaders need to ensure that staff and customers connect with the term "DevOps," and the value it will bring, prior to introducing the initiative.

Organizations should use marketing to identify, anticipate and deliver the value of DevOps in a manner that makes business sense. Leaders must seek to refine their understanding of customer value on an ongoing basis to evolve capabilities and further enable organizational change.

DevOps fails when right team members & organizational change are not managed

In the Gartner 2017 Enterprise DevOps Survey, 88% of respondents said team culture was among the top three people-related attributes with the greatest impact on their organization's ability to scale DevOps. In 2020, TechRadar did a similar survey and over 90% of CIOs responded that their priority was to build a DevOps culture. However, organizations overlook the importance of getting the right mix of staff on board with the upcoming DevOps change and instead focus their efforts on DevOps tools.

Tools are not the solution to a cultural problem

It sounds repetitive, but I still need to reiterate that tools are not the solution to a cultural problem. Organizations have to identify candidates with the right attitude for adopting DevOps practices. Individuals who demonstrate the core values of teamwork, accountability and lifelong learning will be strong DevOps players.

Lack of collaboration affects success of DevOps

Successful DevOps efforts require collaboration with all stakeholders. More often than not, DevOps efforts are instead limited to I&O. Organizations cannot improve their time to value through uncoordinated groups or those focusing on I&O exclusively.

Break down barriers and forge a team-like atmosphere. Varying teams must work together, rather than in uncoordinated silos, to optimize work. "This might start with seeking out an executive who can carry the teams and champion the effort,"

Trying to do too much too quickly

It is important to realize that a big-bang approach — in other words, launching DevOps in a single step — comes with a huge risk of failure. DevOps involves too many variables for this method to be successful in a large IT organization.

It is recommended to use an incremental, iterative approach to implementing DevOps, enabling the organization to focus on continual improvements and ensuring that all groups are collaborating. It is also recommended to start with a politically friendly group to socialize the value of DevOps and reinforce the credibility of the initiative.

Unrealistic expectations of DevOps

Similar to struggling with grounding DevOps initiatives in customer value, a disconnect exists in many organizations between expectations for DevOps and what it can actually deliver.

"Expectation management and marketing are continuous and not a one-time affair"

Manage expectations by agreeing on objectives and metrics. Use marketing to identify, anticipate and satisfy customer value in an ongoing manner. Expectation management and marketing are continuous efforts, not a one-time affair. The bottom line is that unless the entire organization understands and appreciates the benefits of DevOps, and takes the effort to collaborate and bring about a cultural change across Development, Testing and Operations teams, DevOps will not be successful.

Sunday, June 5

Why today's digital enterprises need DevOps

There was a time, in the pre-digital era, when a new business idea alone would disrupt the business landscape. Take the example of the restaurant industry that we are all familiar with. Let's assume that a couple of decades back a restaurant with a seating capacity of 10 people decided it could grow its business by taking home-delivery orders on the phone. The restaurant increased its business from 10 orders/hour to 30 orders/hour; mind you, no software was used, just a great business idea implemented using the telephone. Recently another disruptive idea, one that leverages software, changed the food business landscape. I am talking about the Cloud Kitchen, which has disrupted the food business over the last 4 years. The same restaurant with a seating capacity of 10 people could now outsource its kitchen to 10 cloud kitchens across the city and start servicing 300 orders every hour. If you do not know what a Cloud Kitchen is, do read about it; it is a most interesting software-driven business disruption that many people don't know about.

Today businesses continue to be disrupted not just by new business ideas but also by software agility and innovation. Software-defined disruptions like mobile apps or Cloud Kitchens have changed the food business landscape and continue to drive tremendous business value like never before. It is important to understand that as software disrupts industries, every enterprise is turning into a software company. As a software company, all these enterprises need a seamless SDLC (software delivery lifecycle) process integrated with testing tools, deployment tools, and application and business monitoring tools, along with a team that can promptly support any issue in the software delivery with a short turnaround time.

The main driver to adopt DevOps comes from the enterprise's need to innovate and accelerate the business. Unless new ideas are implemented in software and shipped fast by a unified, capable team, the business does not get the desired business outcome.

Key points to note about business and the need for DevOps are as follows:

  • Businesses develop software for a reason: to get business value
  • Today every business is either a digital business or in the process of Digital Transformation
  • Businesses will continue to be disrupted by software innovation and agility, for example mobile apps
  • Disruptions like Cloud Kitchens are changing the landscape and delivering tremendous business value
  • Software disruption also means that enterprises are becoming software companies
  • So the main driver for DevOps is the need for the enterprise to innovate and accelerate the business

  



Insights on India’s current AI initiatives

India is rapidly advancing in the field of artificial intelligence (AI), driven by both government initiatives and private sector investment...