Curriculum Completion Validation
Ep 1 — Unlocking Knowledge for Agents: solution verification
Ep 1 — fork URL
https://github.com/fareedhayat/iq-series/blob/main/1-Foundry-IQ-Unlocking-Knowledge-for-Agents/cookbook/foundry-iq-cookbook.ipynb
Ep 1 — final output screenshot
Ep 2 — Building the Data Pipeline with Knowledge Sources: solution verification
Ep 2 — fork URL
https://github.com/fareedhayat/iq-series/blob/main/2-Foundry-IQ-Building-the-Data-Pipeline-with-Knowledge-Sources/cookbook/foundry-iq-cookbook.ipynb
Ep 2 — final output screenshot
Ep 3 — Querying the Multi-Source AI Knowledge Bases: solution verification
Ep 3 — fork URL
https://github.com/fareedhayat/iq-series/blob/main/3-Foundry-IQ-Querying-the-Multi-Source-AI-Knowledge-Bases/cookbook/foundry-iq-cookbook.ipynb
Ep 3 — final output screenshot
Episode Insights & Key Takeaways
Episode 1:
Foundry IQ provides a unified knowledge layer behind a single API endpoint over heterogeneous enterprise data, with built-in Microsoft Entra ID permissions and Purview sensitivity labels, so security follows the agent without custom code. The three primitives (knowledge sources, knowledge bases, agent queries) decouple data ingestion from agent logic.
Episode 2:
Knowledge sources split into passive sources (Blob, OneLake, SharePoint), which are indexed ahead of time with Azure Content Understanding for layout-aware enrichment, and remote sources (web, MCP), which are accessed live at query time. This hybrid model means structured, unstructured, and public web data all sit behind one knowledge base without separate retrieval pipelines.
Episode 3:
Agentic retrieval is iterative rather than single-shot: a planner decomposes the query, and an SLM evaluates whether to exit or iterate. Benchmarks show ~36% higher answer quality than single-shot RAG, with retrieval_reasoning_effort (minimal/low/medium) as the cost/quality dial. Constraint I hit: the messages input requires low or medium effort, since minimal disables the planner needed to interpret chat history.
Challenges or feedback
Encountered three issues while running the cookbooks:
Episode 2, blob container setup: hit a "container does not exist" error initially. Suggest the cookbook README clarify which container the deploy template creates and where to find its name in Azure Portal.
Episode 3, Step 2 first cell: pairing KnowledgeRetrievalMinimalReasoningEffort() with a messages array fails. Per Azure agentic-retrieval docs, messages is only supported for low or medium effort. Suggested fix: change to KnowledgeRetrievalLowReasoningEffort() or document the constraint inline.
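A minimal sketch of that Step 2 fix. The class names come from the cookbook; the import path and surrounding request shape are assumptions against the preview azure-search-documents SDK, not verified signatures:

```python
# Sketch of the Episode 3, Step 2 fix: keep the reasoning effort at low
# (or medium) whenever a `messages` array is passed, because minimal
# effort disables the planner that interprets chat history.
# NOTE: the module path below is an assumption against the preview
# azure-search-documents SDK; only the class names are from the cookbook.
from azure.search.documents.indexes.models import (
    KnowledgeRetrievalLowReasoningEffort,
    KnowledgeRetrievalMinimalReasoningEffort,
)

# Before (fails): minimal effort rejects the `messages` input.
# reasoning_effort = KnowledgeRetrievalMinimalReasoningEffort()

# After (works): low effort keeps the planner enabled.
reasoning_effort = KnowledgeRetrievalLowReasoningEffort()
```

The rest of the retrieval request in the cell can stay as-is; only the effort object needs to change.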
Episode 3, Step 5 comparison loop: fails with "the specified type '' is not valid. valid types are: semantic" because SearchIndexKnowledgeSource is created without a reranker configuration, while Azure requires one for low/medium reasoning effort. Suggested fix: add SemanticKnowledgeSourceReranker(semantic_configuration_name="semantic_config") to SearchIndexKnowledgeSourceParameters in cell 5.
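A configuration sketch of that Step 5 fix. The class names and the semantic_config value are from the cookbook; the import path and the keyword argument names (search_index_name, reranker, search_index_parameters) are assumptions against the preview SDK and may differ:

```python
# Sketch of the Episode 3, Step 5 fix: attach a semantic reranker to the
# knowledge source so low/medium reasoning effort has a valid reranker type.
# NOTE: import path and keyword names are assumptions against the preview
# azure-search-documents SDK; class names follow the cookbook.
from azure.search.documents.indexes.models import (
    SearchIndexKnowledgeSource,
    SearchIndexKnowledgeSourceParameters,
    SemanticKnowledgeSourceReranker,
)

knowledge_source = SearchIndexKnowledgeSource(
    name="<your-knowledge-source>",  # placeholder
    search_index_parameters=SearchIndexKnowledgeSourceParameters(
        search_index_name="<your-index>",  # placeholder
        # Without this, retrieval at low/medium effort fails with:
        # "the specified type '' is not valid. valid types are: semantic"
        reranker=SemanticKnowledgeSourceReranker(
            semantic_configuration_name="semantic_config"
        ),
    ),
)
```

The semantic configuration name must match one defined on the underlying search index.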
Badge form confirmation