Monday, August 11

Agentic AI Mastery: From Zero to Pro — The Brain of the Agent (Module 3)

 

📌 Module 3: The Brain of the Agent — LLM Fundamentals

1. Theory

Large Language Models (LLMs) are at the heart of most modern AI agents.
They process text, reason about it, and generate responses that guide the agent’s actions. 

 

Key Concepts

  • Tokenization → Breaking text into smaller units (tokens) the model can process (see the sketch after this list).
  • Embeddings → Vector representations of text that capture semantic meaning.
  • Context Window → The limit on how much text the LLM can “see” at once.
  • Prompt Engineering → Crafting instructions to get the desired outputs.
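
To make the first two concepts concrete, here is a minimal sketch (assuming the transformers and sentence-transformers packages from the setup section below are installed) that tokenizes a sentence with the distilgpt2 tokenizer and embeds it with all-MiniLM-L6-v2:

from transformers import AutoTokenizer
from sentence_transformers import SentenceTransformer

# Tokenization: the model works on token IDs, not raw characters
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokens = tokenizer.tokenize("Agentic AI learns and acts")
print(tokens)                                          # a list of sub-word token strings
print(tokenizer.encode("Agentic AI learns and acts"))  # the corresponding token IDs

# Embeddings: a fixed-length vector that captures the sentence's meaning
embedder = SentenceTransformer("all-MiniLM-L6-v2")
vector = embedder.encode("Agentic AI learns and acts")
print(len(vector))                                     # 384 dimensions for this model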

LLM Types

  • Local LLMs → Run entirely on your machine (e.g., LLaMA, Mistral).
  • Cloud-based LLMs → Accessed via APIs (e.g., OpenAI GPT-4, Anthropic Claude). The sketch below shows how the two are typically called from Python.
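
As a rough illustration (the openai package, the gpt-4o-mini model name, and the OPENAI_API_KEY environment variable are assumptions here, not requirements of this module), a local model runs through a Hugging Face pipeline while a cloud model is reached over an HTTP API:

from transformers import pipeline
from openai import OpenAI   # pip install openai; expects OPENAI_API_KEY in the environment

# Local: weights are downloaded once and inference runs on your own hardware
local_llm = pipeline("text-generation", model="distilgpt2")
print(local_llm("Agentic AI is", max_length=20)[0]["generated_text"])

# Cloud: the prompt is sent to a hosted model and the completion comes back over the API
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # substitute whichever model your account offers
    messages=[{"role": "user", "content": "What is Agentic AI?"}],
)
print(response.choices[0].message.content)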

2. Step-by-Step Windows Setup (For This Module)

  1. Install the Transformers and Sentence-Transformers Libraries

pip install transformers
pip install sentence-transformers

  2. Download a Small Local Model (for quick testing)

from transformers import pipeline

gen = pipeline("text-generation", model="distilgpt2")  # downloads the model on first run
print(gen("Agentic AI is", max_length=20))

  3. Set Up an Embeddings Model

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode("Agentic AI learns and acts")
print(embeddings[:5])  # first five values of the 384-dimensional embedding vector


3. Examples

Example 1 — Zero-Shot Classification

from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

print(classifier("Build an agent that schedules meetings", candidate_labels=["Productivity", "Gaming", "Education"]))
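
For contrast (and to preview quiz question 2), a few-shot prompt puts a handful of labeled examples inside the prompt itself before asking the model to classify a new input. A minimal sketch using the same distilgpt2 generator from the setup (a very small model, so treat the output as illustrative only):

from transformers import pipeline

gen = pipeline("text-generation", model="distilgpt2")

# Few-shot: the labeled examples live in the prompt; the model continues the pattern
few_shot_prompt = (
    "Classify each request as Productivity, Gaming, or Education.\n"
    "Request: Track my daily tasks -> Productivity\n"
    "Request: Recommend a strategy game -> Gaming\n"
    "Request: Build an agent that schedules meetings ->"
)
print(gen(few_shot_prompt, max_new_tokens=5)[0]["generated_text"])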

Example 2 — Summarizing a News Article

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

print(summarizer("Artificial Intelligence is transforming industries...", max_length=40, min_length=10))

Example 3 — Semantic Search Using Embeddings

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer('all-MiniLM-L6-v2')  # same embeddings model as in the setup

docs = ["AI helps businesses", "Cooking pasta", "Agentic AI automates tasks"]
query = "automation in AI"

doc_embeddings = [model.encode(doc) for doc in docs]
query_embedding = model.encode(query)

scores = cosine_similarity([query_embedding], doc_embeddings)
print(scores)  # one similarity score per document; higher means more relevant
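
To turn the score matrix into an answer, take the index of the highest score. A small continuation of the example above (numpy is already installed as a dependency of sentence-transformers):

import numpy as np

best_index = int(np.argmax(scores))  # index of the highest cosine similarity
print(docs[best_index])              # likely "Agentic AI automates tasks" for this query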


4. Exercises

  1. Create a prompt that classifies user queries into “Tech” or “Non-Tech”.
  2. Build a summarizer for PDF documents.
  3. Use embeddings to find the most relevant FAQ answer to a user’s question.

5. Best Practices

  • Always test with small models before switching to expensive ones.
  • Optimize prompts for clarity and structure.

6. Common Mistakes

  • Sending more text than the context window allows → truncated outputs (a token-count check is sketched below).
  • Mixing embeddings produced by one model with embeddings from another model in the same similarity search.
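
A simple guard against the first mistake is to count tokens before sending the prompt. A minimal sketch with the distilgpt2 tokenizer (its context window is 1,024 tokens; substitute the limit of whatever model you actually use):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
CONTEXT_LIMIT = 1024  # distilgpt2's context window; adjust for your model

prompt = "Artificial Intelligence is transforming industries... " * 200
n_tokens = len(tokenizer.encode(prompt))

if n_tokens > CONTEXT_LIMIT:
    print(f"Prompt is {n_tokens} tokens and will be truncated or rejected.")
else:
    print(f"Prompt fits: {n_tokens} tokens.")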

7. Quiz

  1. What is the purpose of embeddings in LLMs?
  2. What’s the difference between few-shot and zero-shot classification?
  3. Why is the context window important?

Agentic AI Mastery: From Zero to Pro — A Complete Guide (Module 2)

📌 Module 2: Your AI Workbench — Setting Up on Windows

1. Theory

A well-configured environment is the foundation for building and running Agentic AI applications efficiently.
On Windows, this means:

  • Installing the right tools (Python, Git, IDEs)
  • Managing virtual environments
  • Installing dependencies
  • Setting up local or cloud-based LLMs

A proper setup ensures reproducibility — you and others can run the same code with minimal issues.


Why Windows Setup Matters for AI Development

  • Many developers in enterprises use Windows by default.
  • With WSL2 or native Python, you can still run modern AI frameworks.
  • Windows allows both local LLM execution and cloud API integration.

2. Step-by-Step Windows Setup (For This Module)

  1. Install Python 3.10+
  2. Install Git
  3. Install VS Code
  4. Install Ollama for Local LLMs, then test it:

ollama run llama2

  5. Create a Virtual Environment

python -m venv agentic_env
.\agentic_env\Scripts\activate

  6. Install Libraries

pip install langchain openai requests wikipedia python-dotenv


3. Examples

Example 1 — Running a Local LLM

  • Run:

ollama run llama2

  • Type:

What is Agentic AI?

Example 2 — Testing LangChain Installation

from langchain.prompts import PromptTemplate

template = PromptTemplate(input_variables=["name"], template="Hello {name}, welcome to Agentic AI!")

print(template.format(name="Ajay"))

Example 3 — Using Hugging Face Transformers Locally

from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-uncased-distilled-squad")

print(qa(question="What is AI?", context="AI stands for Artificial Intelligence."))


4. Exercises

  1. Install a different local model in Ollama.
  2. Create a Python script that checks whether all dependencies are installed (a starting point is sketched below).
  3. Set up a VS Code workspace for an Agentic AI project.
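
One possible starting point for exercise 2 (a minimal sketch, not the only approach; the package list mirrors the pip install command from the setup above):

import importlib.util

# Import names for the packages installed earlier; "dotenv" is the import name
# of the python-dotenv package.
REQUIRED = ["langchain", "openai", "requests", "wikipedia", "dotenv"]

missing = [name for name in REQUIRED if importlib.util.find_spec(name) is None]

if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All dependencies are installed.")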

5. Best Practices

  • Keep a separate virtual environment per project.
  • Use requirements.txt to track dependencies (see the commands below).
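
For the second point, the usual workflow is to freeze the active virtual environment into requirements.txt and reinstall from it on another machine:

pip freeze > requirements.txt
pip install -r requirements.txt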

6. Common Mistakes

  • Forgetting to activate the venv before installing packages.
  • Using system Python instead of project-specific venv.

7. Quiz

  1. Which command activates a virtual environment in PowerShell?
  2. Name two benefits of using Ollama locally.
  3. Why should you keep a requirements.txt file?


