Deep Lake offers highly flexible vector search and hybrid search options, which are discussed in detail in this tutorial.
Performing Vector Search
First, let's show a simple example of vector search using default options, which performs simple cosine similarity search in Python on the client (your machine).
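Before running the search, it helps to see what "cosine similarity search on the client" actually computes. The following is a minimal self-contained sketch with toy 3-dimensional vectors (not real embeddings): each stored row is scored against the query by cosine similarity, and rows are ranked by descending score.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": the query is compared against each row,
# and rows are ranked by descending similarity.
query = [1.0, 0.0, 0.0]
rows = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.0, 1.0, 0.0],
    "doc_c": [0.7, 0.7, 0.0],
}
ranked = sorted(rows, key=lambda name: cosine_similarity(query, rows[name]), reverse=True)
print(ranked)  # doc_a is most similar to the query
```

In the real search, the embedding function first converts the text prompt into a vector, and this scoring runs over every embedding in the Vector Store.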
prompt = "What do trust and safety models do?"

search_results = vector_store.search(embedding_data=prompt, embedding_function=embedding_function)
search_results is a dictionary with keys for the text, score, id, and metadata, with entries ordered by score. By default, the search returns the top 4 results, which can be verified using:
len(search_results['text'])  # Returns 4
If we examine the first returned text, it appears to contain the text about trust and safety models that is relevant to the prompt.
search_results['text'][0]
Returns:
Trust and Safety Models
=======================
We decided to open source the training code of the following models:
- pNSFWMedia: Model to detect tweets with NSFW images. This includes adult and porn content.
- pNSFWText: Model to detect tweets with NSFW text, adult/sexual topics.
- pToxicity: Model to detect toxic tweets. Toxicity includes marginal content like insults and certain types of harassment. Toxic content does not violate Twitter's terms of service.
- pAbuse: Model to detect abusive content. This includes violations of Twitter's terms of service, including hate speech, targeted harassment and abusive behavior.
We have several more models and rules that we are not going to open source at this time because of the adversarial nature of this area. The team is considering open sourcing more models going forward and will keep the community posted accordingly.
We can also retrieve the corresponding filename from the metadata, which shows the top result came from the README.
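A minimal sketch of pulling the source file out of the metadata. The sample_results structure below is an illustrative stand-in for real search output, and the 'filename' metadata key is an assumption that depends on how the documents were ingested:

```python
# Illustrative stand-in for the dictionary returned by vector_store.search;
# values are placeholders, and the 'filename' key is an assumption.
sample_results = {
    "text": ["Trust and Safety Models\n..."],
    "score": [0.93],
    "id": ["a1b2"],
    "metadata": [{"filename": "README.md"}],
}

# Index 0 is the top-scoring result, since entries are ordered by score.
top_source = sample_results["metadata"][0]["filename"]
print(top_source)  # README.md
```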
The first search result with the L2 distance metric returns the same text as the previous cosine similarity search:
search_results['text'][0]
Returns:
Trust and Safety Models
=======================
We decided to open source the training code of the following models:
- pNSFWMedia: Model to detect tweets with NSFW images. This includes adult and porn content.
- pNSFWText: Model to detect tweets with NSFW text, adult/sexual topics.
- pToxicity: Model to detect toxic tweets. Toxicity includes marginal content like insults and certain types of harassment. Toxic content does not violate Twitter's terms of service.
- pAbuse: Model to detect abusive content. This includes violations of Twitter's terms of service, including hate speech, targeted harassment and abusive behavior.
We have several more models and rules that we are not going to open source at this time because of the adversarial nature of this area. The team is considering open sourcing more models going forward and will keep the community posted accordingly.
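It is no coincidence that both metrics return the same top result: for unit-normalized embeddings, the squared L2 distance equals 2 - 2·cos(a, b), so ranking by ascending L2 distance is equivalent to ranking by descending cosine similarity. A small self-contained check with toy vectors (not real embeddings):

```python
from math import sqrt

def normalize(v):
    n = sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine_similarity(a, b):
    # Inputs are already normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def l2_distance(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

query = normalize([1.0, 2.0, 3.0])
rows = [normalize(v) for v in ([3.0, 1.0, 2.0], [1.0, 2.0, 2.9], [-1.0, 0.0, 1.0])]

# Rank row indices by descending cosine similarity and by ascending L2 distance.
by_cos = sorted(range(len(rows)), key=lambda i: cosine_similarity(query, rows[i]), reverse=True)
by_l2 = sorted(range(len(rows)), key=lambda i: l2_distance(query, rows[i]))
print(by_cos == by_l2)  # both metrics produce the same ranking
```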
Full Customization of Vector Search
Deep Lake's Compute Engine can be used to rapidly execute a variety of different search logic. It is available with !pip install "deeplake[enterprise]" (make sure to restart your kernel after installation), and it is only available for data stored in or connected to Deep Lake.
Let's load a representative Vector Store that is already stored in the Deep Lake Tensor Database. If data is not being written, it is advisable to use read_only = True.
prompt = "What do trust and safety models do?"

embedding = embedding_function(prompt)[0]

# Format the embedding array or list as a string, so it can be passed in the REST API request.
embedding_string = ",".join([str(item) for item in embedding])

tql_query = f"select * from (select text, cosine_similarity(embedding, ARRAY[{embedding_string}]) as score) order by score desc limit 5"
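The semantics of this TQL query — score every row by cosine similarity against the query embedding, sort descending, and keep the top 5 — can be sketched in plain Python. The texts and embeddings below are toy stand-ins for the Vector Store's tensors, not real data:

```python
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Toy rows standing in for the Vector Store's text and embedding tensors.
texts = ["row0", "row1", "row2", "row3", "row4", "row5"]
embeddings = [[1, 0], [0, 1], [1, 1], [0.9, 0.1], [0.4, 0.6], [-1, 0]]
query_embedding = [1.0, 0.2]

# select text, cosine_similarity(embedding, ARRAY[...]) as score
scored = [(t, cosine_similarity(e, query_embedding)) for t, e in zip(texts, embeddings)]

# order by score desc limit 5
top5 = sorted(scored, key=lambda pair: pair[1], reverse=True)[:5]
print([t for t, _ in top5])
```

In the actual query, this scoring and sorting is pushed down to the database rather than performed on the client.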
Let's run the query, noting that the query execution happens in the Managed Tensor Database, and not on the client.
If we examine the first returned text, it appears to contain the same text about trust and safety models that is relevant to the prompt.
search_results['text'][0]
Returns:
Trust and Safety Models
=======================
We decided to open source the training code of the following models:
- pNSFWMedia: Model to detect tweets with NSFW images. This includes adult and porn content.
- pNSFWText: Model to detect tweets with NSFW text, adult/sexual topics.
- pToxicity: Model to detect toxic tweets. Toxicity includes marginal content like insults and certain types of harassment. Toxic content does not violate Twitter's terms of service.
- pAbuse: Model to detect abusive content. This includes violations of Twitter's terms of service, including hate speech, targeted harassment and abusive behavior.
We have several more models and rules that we are not going to open source at this time because of the adversarial nature of this area. The team is considering open sourcing more models going forward and will keep the community posted accordingly.
We can also retrieve the corresponding filename from the metadata, which shows the top result came from the README.