Semantic Kernel Python Vector Search with Cosmos DB for MongoDB

This article is a hands-on guide to using Semantic Kernel and Python with Azure Cosmos DB for MongoDB. It covers setting up Python and loading data into Azure Cosmos DB, paving the way for seamless Semantic Kernel integration and powerful language capabilities for developers.

This tutorial walks through executing vector searches on Azure Cosmos DB for MongoDB vCore using Semantic Kernel and Python. It is based on the LangChain solution, LangChain Vector Search with Cosmos DB for MongoDB, and demonstrates using Azure Cosmos DB for MongoDB as a memory store with Semantic Kernel. Please check out Enhance Customer Support with Semantic Kernel and Azure OpenAI for a guide to using Semantic Kernel with C#.

The extensibility of Semantic Kernel provides developers with connectors and plugins that can be used independently to load data and demonstrate a vector search. The true potential of Semantic Kernel, however, lies in combining these components. By following the instructions below and harnessing the functionality Semantic Kernel offers, you can gain valuable insight into implementing vector searches in your own projects.

What is Azure Cosmos DB for MongoDB vCore

Azure Cosmos DB for MongoDB vCore allows users to harness the benefits of a fully managed MongoDB-compatible database within the Azure environment. This integration streamlines the deployment and management of MongoDB databases on the Azure cloud platform, ensuring exceptional performance and reliability for applications requiring MongoDB compatibility.

Key benefits:

  • Seamless integration: Build modern apps with familiar MongoDB architecture, plus enjoy native Azure integrations and unified support.
  • Cost-effective: Pay only for resources used with scalable pricing tiers and optional high availability.
  • Highly flexible: Easily scale vertically and horizontally, with automatic sharding for large databases.
  • Effortless migration: Migrate from MongoDB to vCore with ease, thanks to full feature compatibility.

What is Semantic Kernel

Semantic Kernel is a free, open-source Software Development Kit (SDK) from Microsoft. Its main purpose is to enable developers to seamlessly create AI agents by integrating their existing code with AI models from platforms such as OpenAI, Azure OpenAI, and Hugging Face.

The provided SDK enables developers to create agents with the ability to respond to inquiries, automate tasks, and perform various functions. As an AI orchestration layer, Semantic Kernel facilitates the integration of AI models and plugins to create new user experiences.

Key Features:

  • Open-Source SDK: It’s open-source, meaning it’s freely available for anyone to use and modify.
  • Integration with AI Models: Developers can easily combine their existing code with AI models from platforms like OpenAI, Azure OpenAI, and Hugging Face.
  • Agent Capabilities: Allows developers to create AI agents that can answer questions, automate processes, and perform various tasks.
  • AI Orchestration Layer: Acts as a layer that orchestrates different AI models and plugins to create new user experiences.
  • Extensibility: Provides flexibility and room for expansion, enabling developers to customize and enhance functionality as needed.
  • Integration with AI Services: Seamlessly integrates with various AI services, making it easy to leverage different tools and resources in AI development.

Setting Up Python

In this tutorial, we will use Semantic Kernel and Python to import vectors into Azure Cosmos DB for MongoDB vCore and perform a similarity search. This guide was developed and tested with Python 3.11.4.

First, set up your Python virtual environment in your workspace.

python -m venv venv

Next, create a requirements file in your workspace called ‘requirements.txt’ and add the following Python packages.

pymongo==4.6.3
python-dotenv==1.0.1
openai==1.16.2
semantic-kernel==0.9.8b1

Activate your environment and install the dependencies in your workspace. The activation command below is for Windows; on macOS/Linux, use ‘source venv/bin/activate’ instead.

venv\Scripts\activate
python -m pip install -r requirements.txt

Create a file named ‘.env’ to store your environment variables.

OPENAI_API_KEY="**Your Open AI Key**"
CONNECTION_STRING="mongodb+srv:**your connection string from Azure Cosmos DB**"
DATABASE_NAME="research"
COLLECTION_NAME="resources"
Environment Variable | Description
OPENAI_API_KEY | The key to connect to the OpenAI API. If you do not possess an OpenAI API key, you can obtain one by following the guidelines outlined here.
CONNECTION_STRING | The connection string for Azure Cosmos DB for MongoDB vCore (see below)
DATABASE_NAME | Name of the Azure Cosmos DB for MongoDB database – defaults to “research”
COLLECTION_NAME | Name of the Azure Cosmos DB for MongoDB collection – defaults to “resources”
.env file variables

Azure Cosmos DB for MongoDB vCore Connection String

The environment variable CONNECTION_STRING from the ‘.env’ file includes the Azure Cosmos DB for MongoDB vCore connection string. You can retrieve this value by accessing the Azure portal for your Cosmos DB instance and selecting “Connection strings.” It will be necessary to input your username and password in the designated fields.
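Before moving on, it can help to sanity-check the value you paste into CONNECTION_STRING. The helper below is a hypothetical convenience written for this tutorial (it is not part of Semantic Kernel or pymongo); it only checks the string's shape and does not attempt a connection:

```python
from urllib.parse import urlparse

def looks_like_vcore_connstr(connstr: str) -> bool:
    """Rough shape check for a Cosmos DB for MongoDB vCore connection string."""
    parsed = urlparse(connstr)
    # vCore connection strings use the mongodb+srv scheme and embed credentials.
    return (
        parsed.scheme == "mongodb+srv"
        and "@" in parsed.netloc           # username/password present
        and "<user>" not in connstr        # placeholders replaced
        and "<password>" not in connstr
    )

print(looks_like_vcore_connstr(
    "mongodb+srv://user:secret@example.mongocluster.cosmos.azure.com/?tls=true"
))  # True
```

If the check fails, re-copy the connection string from the Azure portal and make sure the username and password placeholders have been replaced.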

Azure Cosmos DB for MongoDB vCore – Settings – Connection Strings

Loading Azure Cosmos DB with Semantic Kernel

Semantic Kernel’s SemanticTextMemory class will be used to load our sample JSON data into the memory store, Azure Cosmos DB for MongoDB. SemanticTextMemory offers functionality for saving, retrieving, and searching text in a semantic memory store. Although this article focuses on Azure Cosmos DB as the memory store, Semantic Kernel allows the memory store to be swapped out with minimal code adjustments.

Download sample Dataset

To start off, you will need to download and extract the sample JSON dataset. The dataset comprises excerpts from the eighth edition of ‘Rocket Propulsion Elements’ and is intended for use in our hands-on vector search.

Download and extract into your working directory: Rocket_Propulsion_Elements.zip.
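The loader later in this article reads only a few fields from this file. Judging from that code, the dataset has roughly the following shape (the field names come from the loader; the sample values here are invented for illustration):

```python
import json

# Hypothetical miniature version of Rocket_Propulsion_Elements.json,
# showing only the fields the loader reads.
sample = {
    "title": "Rocket Propulsion Elements",
    "pages": [
        {
            "chapter": "Chapter 1",
            "page": 4,
            "body": "Rocket propulsion systems can be classified according to ...",
        },
    ],
}

# Round-trip through JSON, then pull out the fields the loader uses.
data = json.loads(json.dumps(sample))
for page in data["pages"]:
    print(data["title"], page["chapter"], page["page"])
```

Each page’s body text is what gets embedded and stored; the title, chapter, and page number travel along as metadata.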

Loading Memory Store

With the sample data downloaded and extracted, we can proceed to load it into Azure Cosmos DB for MongoDB using Semantic Kernel and Python. Because we aim to perform a vector search, we also need to create text embeddings using an OpenAITextEmbedding service.

Semantic Kernel Service

One of the main features of Semantic Kernel is its ability to add different AI services to the kernel. This allows you to easily swap out different AI services to compare their performance and to leverage the best model for your needs. We will be leveraging OpenAITextEmbedding to create embeddings using OpenAI for our Azure Cosmos DB vector store:

kernel = Kernel()

kernel.add_service(
    OpenAITextEmbedding(
        service_id="text_embedding",
        ai_model_id="text-embedding-ada-002",
        api_key=environ.get("OPENAI_API_KEY")
    )
)
Service Property | Description
Service ID | Uniquely identifies the service; used to recall it later in code
AI Model ID | Each service offers a list of models; we are using the OpenAI embedding service with the model ‘text-embedding-ada-002’
API Key | The key to connect to the OpenAI API. If you do not possess an OpenAI API key, you can obtain one by following the guidelines outlined here.
OpenAITextEmbedding Service Properties

Semantic Kernel Plugin

The next key element we’re going to explore in Semantic Kernel is the concept of plugins. These plugins serve as the foundational building blocks of Semantic Kernel. They are capable of interacting with plugins in ChatGPT, Bing, Microsoft 365, and with a memory store such as Azure Cosmos DB. In this article, we will add a plugin backed by the AzureCosmosDBMemoryStore memory store, enabling vector search against our Azure Cosmos DB instance.

First, we must create the Azure Cosmos DB memory store:

store = await AzureCosmosDBMemoryStore.create(
    cosmos_connstr=COSMOS_CONNECTION_STRING,
    cosmos_api="mongo-vcore",
    database_name=COSMOS_DATABASE,
    collection_name=COSMOS_COLLECTION,
    index_name=index_name,
    vector_dimensions=vector_dimensions,
    num_lists=num_lists,
    similarity=similarity,
    kind=kind,
    application_name="",
    m=16,
    ef_construction=64,
    ef_search=40,
)
Memory Store Parameter | Value
cosmos_connstr | The connection string for Azure Cosmos DB for MongoDB vCore, from the environment variables
cosmos_api | Use “mongo-vcore”
database_name | Name of the Azure Cosmos DB for MongoDB database, from the environment variables
collection_name | Name of the Azure Cosmos DB for MongoDB collection, from the environment variables
index_name | Embedding index – “vectorSearchIndex”
vector_dimensions | 1536 – text-embedding-ada-002 uses a 1536-dimensional embedding vector
num_lists | 1 – for a small demo, you can start with num_lists set to 1 to perform a brute-force search across all vectors
similarity | CosmosDBSimilarityType.COS
kind | CosmosDBVectorSearchType.VECTOR_IVF
application_name | Not applicable for this tutorial
m | Defaults to 16 – not used by ‘vector-ivf’
ef_construction | Defaults to 64 – not used by ‘vector-ivf’
ef_search | Defaults to 40 – not used by ‘vector-ivf’
AzureCosmosDBMemoryStore parameters

The following code will employ the newly created AzureCosmosDBMemoryStore to integrate a plugin into our kernel. It is important to highlight that we retrieve the embedding service using the service ID “text_embedding” to produce the vector embeddings in our memory store, Azure Cosmos DB. The SemanticTextMemory instance will aid in loading our JSON data into Azure Cosmos DB and subsequently allow us to perform vector searches.

memory = SemanticTextMemory(storage=store,
    embeddings_generator=kernel.get_service("text_embedding"))
kernel.add_plugin(TextMemoryPlugin(memory), "TextMemoryPluginACDB")

Semantic Kernel Python Loader

Now that we have reviewed services and plugins, let’s put it all together. First, start by creating a file called loader.py in your working directory. Then, add the following Python code.

loader.py

import json
import uuid
import asyncio
from os import environ
from dotenv import load_dotenv
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAITextEmbedding
from semantic_kernel.core_plugins.text_memory_plugin import TextMemoryPlugin
from semantic_kernel.memory.semantic_text_memory import SemanticTextMemory
from semantic_kernel.connectors.memory.azure_cosmosdb.cosmosdb_utils import (
    CosmosDBSimilarityType,
    CosmosDBVectorSearchType,
)
from semantic_kernel.connectors.memory.azure_cosmosdb import AzureCosmosDBMemoryStore

load_dotenv(override=True)


COSMOS_CONNECTION_STRING = environ.get("CONNECTION_STRING")
COSMOS_DATABASE = environ.get("DATABASE_NAME")
COSMOS_COLLECTION = environ.get("COLLECTION_NAME")

async def populate_memory_from_json(memory: SemanticTextMemory, json_file_path: str) -> None:
    """Save memory information for JSON records."""

    print('--populate records--')
    with open(json_file_path) as file:
        data = json.load(file)

        resource_title = data['title']
        resource_id = str(uuid.uuid4().hex)
        pages = data['pages']

        for page in pages:
            page_id = str(uuid.uuid4().hex)
            text = page['body']
            chapter = page['chapter']
            page_number = page['page']

            await memory.save_information(
                collection=COSMOS_COLLECTION,
                id=str(uuid.uuid4().hex),
                text=text,
                description="",
                additional_metadata=dict(
                    resource_id=resource_id,
                    page_id=page_id,
                    title=resource_title,
                    chapter=chapter,
                    pagenumber=page_number))



async def loader(file_path):

    # Vector search index parameters
    index_name = "vectorSearchIndex"
    vector_dimensions = 1536  # text-embedding-ada-002 uses a 1536-dimensional embedding vector
    num_lists = 1
    similarity = CosmosDBSimilarityType.COS
    kind = CosmosDBVectorSearchType.VECTOR_IVF

    kernel = Kernel()

    print('--add text embedding service--')
    kernel.add_service(
        OpenAITextEmbedding(
            service_id="text_embedding",
            ai_model_id="text-embedding-ada-002",
            api_key=environ.get("OPENAI_API_KEY")
        )
    )

    store = await AzureCosmosDBMemoryStore.create(
        cosmos_connstr=COSMOS_CONNECTION_STRING,
        cosmos_api="mongo-vcore",
        database_name=COSMOS_DATABASE,
        collection_name=COSMOS_COLLECTION,
        index_name=index_name,
        vector_dimensions=vector_dimensions,
        num_lists=num_lists,
        similarity=similarity,
        kind=kind,
        application_name="",
        m=16,
        ef_construction=64,
        ef_search=40,
    )

    print('--add memory plugin--')
    memory = SemanticTextMemory(storage=store, embeddings_generator=kernel.get_service("text_embedding"))
    kernel.add_plugin(TextMemoryPlugin(memory), "TextMemoryPluginACDB")

    await populate_memory_from_json(memory, file_path)

    print('--done--')
    

asyncio.run(loader('Rocket_Propulsion_Elements.json'))

To complete the loading step, execute the following command from your working directory. It will load the sample JSON data, generate the vector embeddings, and create the index.

python loader.py

output

--add text embedding service--
--add memory plugin--
--populate records--
--done--

To verify that the documents loaded successfully, use MongoDB Compass or a similar tool.

Resources collection on Azure Cosmos DB for MongoDB (MongoDB Compass)

The “resources” collection includes a field called “embedding,” which is the default field name used by Semantic Kernel to store the text embeddings. Additional required fields include the original text, the description, and the metadata; I have observed errors when running vector searches if these fields are not present in the collection. If you have worked through the LangChain example, LangChain Vector Search with Cosmos DB for MongoDB, you may have noticed that LangChain offers greater control over the fields stored in Azure Cosmos DB. The method populate_memory_from_json invokes the save_information method on our memory store, which requires the collection name, a unique ID, the text to embed, the description, and the additional metadata.

You can also confirm the creation of the “vectorSearchIndex” cosmosSearch index on the “embedding” field within the resources collection.

vectorSearchIndex within Resources collection (MongoDB Compass)

Semantic Kernel Python Vector Search on Azure Cosmos DB

We will utilize much of the Python code discussed above for the OpenAI text embedding service and repurpose the Azure Cosmos DB memory store plugin for our memory search/vector search. While the core code for our kernel remains unchanged, the method invoked will shift from memory.save_information to memory.search.

query = "supersonic combustion"

vector_search_result = await memory.search(COSMOS_COLLECTION, query, limit=3)

for result in vector_search_result:
    print(result.additional_metadata)
    print(result.relevance)  # Search relevance, from 0 to 1, where 1 means perfect match.

The search method is an asynchronous operation designed to execute a search against the memory store, specifically Azure Cosmos DB for MongoDB in our scenario. Upon completion, it returns an enumerable collection of MemoryQueryResult, which is subsequently iterated over to reveal the matched documents.

To perform a vector search with Semantic Kernel against the sample data loaded into Azure Cosmos DB, you can create a separate code file. Simply name the file ‘search.py’ in your working directory and add the following Python code:

search.py

import asyncio
from os import environ
from dotenv import load_dotenv
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAITextEmbedding
from semantic_kernel.core_plugins.text_memory_plugin import TextMemoryPlugin
from semantic_kernel.memory.semantic_text_memory import (
    MemoryQueryResult,
    SemanticTextMemory,
)
from semantic_kernel.connectors.memory.azure_cosmosdb.cosmosdb_utils import (
    CosmosDBSimilarityType,
    CosmosDBVectorSearchType,
)
from semantic_kernel.connectors.memory.azure_cosmosdb import AzureCosmosDBMemoryStore

load_dotenv(override=True)


COSMOS_CONNECTION_STRING = environ.get("CONNECTION_STRING")
COSMOS_DATABASE = environ.get("DATABASE_NAME")
COSMOS_COLLECTION = environ.get("COLLECTION_NAME")


async def main():
    # Vector search index parameters
    index_name = "vectorSearchIndex"
    vector_dimensions = 1536  # text-embedding-ada-002 uses a 1536-dimensional embedding vector
    num_lists = 1
    similarity = CosmosDBSimilarityType.COS
    kind = CosmosDBVectorSearchType.VECTOR_IVF

    kernel = Kernel()

    kernel.add_service(
        OpenAITextEmbedding(
            service_id="text_embedding",
            ai_model_id="text-embedding-ada-002",
            api_key=environ.get("OPENAI_API_KEY")
        )
    )


    store = await AzureCosmosDBMemoryStore.create(
        cosmos_connstr=COSMOS_CONNECTION_STRING,
        cosmos_api="mongo-vcore",
        database_name=COSMOS_DATABASE,
        collection_name=COSMOS_COLLECTION,
        index_name=index_name,
        vector_dimensions=vector_dimensions,
        num_lists=num_lists,
        similarity=similarity,
        kind=kind,
        application_name="",
        m=16,
        ef_construction=64,
        ef_search=40,
    )

    memory = SemanticTextMemory(storage=store, embeddings_generator=kernel.get_service("text_embedding"))
    kernel.add_plugin(TextMemoryPlugin(memory), "TextMemoryPluginACDB")


    query = "supersonic combustion"

    vector_search_result = await memory.search(COSMOS_COLLECTION, query, limit=3)

    for result in vector_search_result:
        print(result.additional_metadata)
        print(result.relevance)  # Search relevance, from 0 to 1, where 1 means perfect match.


asyncio.run(main())

Now that our Python file ‘search.py’ is located in our working directory, it is time to initiate a search.

python search.py

The output consists of the matching documents (limited to 3), along with their similarity scores, which range from 0 to 1, with 1 indicating a perfect match.

{"text": "\nFuel Injection\nCompressor\nsection\nCombustion\nsection\nTurbine\nsection\nAfterburner\nand nozzle \nsection\nShaft\nFIGURE 1\u20131. Simpli\ufb01ed schematic diagram of a turbojet engine.\nFIGURE 1\u20132. Simpli\ufb01ed diagram of a ramjet with a supersonic inlet (converging and\ndiverging \ufb02ow passage).\nto near their design \ufb02ight speed to become functional. The primary applications of\nramjets with subsonic combustion have been in shipboard and ground-launched\nantiaircraft missiles. Studies of a hydrogen-fueled ramjet for hypersonic aircraft\nlook promising. The supersonic \ufb02ight vehicle is a combination of a ramjet-driven\nhigh-speed airplane and a one- or two-stage rocket booster. It can travel at speeds\nup to a Mach number of 25 at altitudes of up to 50,000 m.\n1.2. ROCKET PROPULSION\nRocket propulsion systems can be classi\ufb01ed according to the type of energy\nsource (chemical, nuclear, or solar), the basic function (booster stage, sustainer\nor upper stages, attitude control, orbit station keeping, etc.), the type of vehicle\n(aircraft, missile, assisted takeoff, space vehicle, etc.), size, type of propellant,\ntype of construction, or number of rocket propulsion units used in a given vehicle.\nAnother way is to classify by the method of producing thrust. A thermody-\nnamic expansion of a gas is used in the majority of practical rocket propulsion\nconcepts. The internal energy of the gas is converted into the kinetic energy of\nthe exhaust \ufb02ow and the thrust is produced by the gas pressure on the surfaces\nexposed to the gas, as will be explained later. This same thermodynamic the-\nory and the same generic equipment (nozzle) is used for jet propulsion, rocket\npropulsion, nuclear propulsion, laser propulsion, solar-thermal propulsion, and\nsome types of electrical propulsion. 
Totally different methods of producing thrust", "description": "", "additional_metadata": {"resource_id": "a6ff0e1f989d48cd883221b12b9c750e", "page_id": "1b5b5f8e1ff94a60af9be762c205368a", "title": "Rocket Propulsion Elements", "chapter": "Chapter 1", "pagenumber": 4}}
0.8406440866185456
{"text": "\nFIGURE 1\u20135. Simpli\ufb01ed perspective three-quarter section of a typical solid propellant\nrocket motor with the propellant grain bonded to the case and the insulation layer and\nwith a conical exhaust nozzle. The cylindrical case with its forward and aft hemispherical\ndomes form a pressure vessel to contain the combustion chamber pressure. Adapted with\npermission from Ref. 12\u20131.\ncharge is called the grain and it contains all the chemical elements for complete\nburning. Once ignited, it usually burns smoothly at a predetermined rate on all the\nexposed internal surfaces of the grain. Initial burning takes place at the internal\nsurfaces of the cylinder perforation and the four slots. The internal cavity grows\nas propellant is burned and consumed. The resulting hot gas \ufb02ows through the\nsupersonic nozzle to impart thrust. Once ignited, the motor combustion proceeds\nin an orderly manner until essentially all the propellant has been consumed. There\nare no feed systems or valves.
See Refs. 1\u20137 to 1\u201310.\nLiquid and solid propellants, and the propulsion systems that use them, are dis-\ncussed in Chapters 6 to 11 and 12 to 15, respectively. Liquid and solid propellant\nrocket propulsion systems are compared in Chapter 19.\nGaseous propellant rocket engines use a stored high-pressure gas, such as air,\nnitrogen, or helium, as their working \ufb02uid or propellant. The stored gas requires\nrelatively heavy tanks. These cold gas engines have been used on many early\nspace vehicles for low thrust maneuvers and for attitude control systems and\nsome are still used today. Heating the gas by electrical energy or by combus-\ntion of certain monopropellants improves the performance and this has often\nbeen called warm gas propellant rocket propulsion. Chapter 7 discusses gaseous\npropellants.\nHybrid propellant rocket propulsion systems use both a liquid and a solid\npropellant. For example, if a liquid oxidizing agent is injected into a combus-\ntion chamber \ufb01lled with a solid carbonaceous fuel grain, the chemical reaction\nproduces hot combustion gases (see Fig. 1\u20136). They are described further in\nChapter 16. Several have \ufb02own successfully.", "description": "", "additional_metadata": {"resource_id": "a6ff0e1f989d48cd883221b12b9c750e", "page_id": "333e23bb7e324e76ad5cdd8aba9c5163", "title": "Rocket Propulsion Elements", "chapter": "Chapter 1", "pagenumber": 8}}
0.8283756026141529
{"text": "\nTABLE 1\u20132. Comparison of Several Characteristics of a Typical Chemical Rocket and\nTwo-Duct Propulsion Systems\nChemical\nRocket Engine\nor Rocket\nFeature\nMotor\nTurbojet Engine\nRamjet
Engine\nThrust-to-weight ratio,\ntypical\n75:1\n5:1, turbojet and\nafterburner\n7:1 at Mach 3 at\n30,000 ft\nSpeci\ufb01c fuel consumption\n(pounds of propellant or\nfuel per hour per pound\nof thrust)a\n8\u201314\n0.5\u20131.5\n2.3\u20133.5\nSpeci\ufb01c thrust (pounds of\nthrust per square foot\nfrontal area)b\n5000\u201325,000\n2500 (low Mach at\nsea level)\n2700 (Mach 2 at sea\nlevel)\nThrust change with altitude\nSlight increase\nDecreases\nDecreases\nThrust vs. \ufb02ight speed\nNearly constant Increases with\nspeed\nIncreases with speed\nThrust vs. air temperature\nConstant\nDecreases with\ntemperature\nDecreases with\ntemperature\nFlight speed vs. exhaust\nvelocity\nUnrelated,\n\ufb02ight speed\ncan be\ngreater\nFlight speed\nalways less than\nexhaust velocity\nFlight speed always\nless than exhaust\nvelocity\nAltitude limitation\nNone; suited to\nspace travel\n14,000\u201317,000 m\n20,000 m at Mach 3\n30,000 m at Mach 5\n45,000 m at Mach 12\nSpeci\ufb01c impulse, typicalc\n(thrust force per unit\npropellant or fuel weight\n\ufb02ow per second)\n270 sec\n1600 sec\n1400 sec\naMultiply by 0.102 to convert to kg/(hr-N).\nbMultiply by 47.9 to convert to N/m2\ncSpeci\ufb01c impulse is a performance parameter and is de\ufb01ned in Chapter 2.\nAt supersonic \ufb02ight speeds above Mach 2, the ramjet engine (a pure duct\nengine) becomes attractive for \ufb02ight within the atmosphere. Thrust is produced\nby increasing the momentum of the air as it passes through the ramjet, basically as\nis accomplished in the turbojet and turbofan engines but without compressors or\nturbines. Figure 1\u20132 shows the basic components of one type of ramjet. Ramjets\nwith subsonic combustion and hydrocarbon fuel have an upper speed limit of\napproximately Mach 5; hydrogen fuel, with hydrogen cooling, raises this to at\nleast Mach
16. Ramjets with supersonic combustion are known as scramjets and\nhave \ufb02own in experimental vehicles. All ramjets depend on rocket boosters or\nsome other method (such as being launched from an aircraft) for being accelerated", "description": "", "additional_metadata": {"resource_id": "a6ff0e1f989d48cd883221b12b9c750e", "page_id": "67c10eb47e1d43d2a0130ff8b2f561ab", "title": "Rocket Propulsion Elements", "chapter": "Chapter 1", "pagenumber": 3}}
0.8222216612762712
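Because we created the memory store with CosmosDBSimilarityType.COS, the relevance value printed after each document is a cosine-similarity score. For intuition, here is how cosine similarity is computed, shown with toy 3-dimensional vectors rather than the 1536-dimensional embeddings used above:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # ≈ 1.0 (identical)
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 0.0 (orthogonal)
```

Scores around 0.82–0.84, as in the output above, indicate passages whose embeddings point in a very similar direction to the query embedding.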

Conclusion

Semantic Kernel is truly a game-changer in the world of AI SDKs. Its simple yet powerful programming model makes it incredibly easy to enhance your app with advanced language capabilities. One aspect that stands out is the seamless implementation of plugins and service connections, which greatly enhances its flexibility and adaptability across different applications.

Although the abundance of resources related to Semantic Kernel is impressive, there was initially a shortage of Python examples. The community’s enthusiasm and the growing adoption of Semantic Kernel, however, should lead to a wider variety of code samples and implementations online, further empowering developers to leverage its full potential in their projects.
