### Minor Changes
- #4766 `a4d42c5` Thanks @IMax153! - This release includes a complete refactor of the internals of the base `@effect/ai` library, with a focus on flexibility for the end user and incorporation of more information from model providers.

#### Notable Changes
**`AiLanguageModel` and `AiEmbeddingModel`**

The `Completions` service from `@effect/ai` has been renamed to `AiLanguageModel`, and the `Embeddings` service has similarly been renamed to `AiEmbeddingModel`. In addition, `Completions.create` and `Completions.toolkit` have been unified into `AiLanguageModel.generateText`. Similarly, `Completions.stream` and `Completions.toolkitStream` have been unified into `AiLanguageModel.streamText`.
**Structured Outputs**

`Completions.structured` has been renamed to `AiLanguageModel.generateObject`, and this method now returns a specialized `AiResponse.WithStructuredOutput` type, which contains a `value` property holding the result of the structured output call. This enhancement prevents the end user from having to unnecessarily unwrap an `Option`.
**`AiModel` and `AiPlan`**

The `.provide` method on a built `AiModel` / `AiPlan` has been renamed to `.use`, which better reflects that the user is using the services provided by the model / plan to run a particular piece of code.

In addition, the
`AiPlan.fromModel` constructor has been simplified into `AiPlan.make`, which allows you to create an initial `AiPlan` with multiple steps incorporated. For example:
```ts
import { AiPlan } from "@effect/ai"
import { OpenAiLanguageModel } from "@effect/ai-openai"
import { AnthropicLanguageModel } from "@effect/ai-anthropic"
import { Effect } from "effect"

const main = Effect.gen(function* () {
  const plan = yield* AiPlan.make(
    { model: OpenAiLanguageModel.model("gpt-4"), attempts: 1 },
    { model: AnthropicLanguageModel.model("claude-3-7-sonnet-latest"), attempts: 1 },
    { model: AnthropicLanguageModel.model("claude-3-5-sonnet-latest"), attempts: 1 }
  )

  // `program` is assumed to be defined elsewhere as an effect
  // that requires the services provided by the plan
  yield* plan.use(program)
})
```
**`AiInput` and `AiResponse`**

The `AiInput` and `AiResponse` types have been refactored to allow inclusion of more information and metadata from model providers where possible, such as reasoning output and prompt cache token utilization.

In addition, for an `AiResponse` you can now access metadata that is specific to a given provider. For example, when using OpenAi to generate audio, you can check the input and output audio tokens used:

```ts
import { AiLanguageModel } from "@effect/ai"
import { OpenAiLanguageModel } from "@effect/ai-openai"
import { Effect, Option } from "effect"

const getDadJoke = AiLanguageModel.generateText({
  prompt: "Generate a hilarious dad joke"
})

Effect.gen(function* () {
  const model = yield* OpenAiLanguageModel.model("gpt-4o")
  const response = yield* model.use(getDadJoke)
  const metadata = response.getProviderMetadata(
    OpenAiLanguageModel.ProviderMetadata
  )
  if (Option.isSome(metadata)) {
    console.log(metadata.value)
  }
})
```
**`AiTool` and `AiToolkit`**

The `AiToolkit` has been completely refactored to simplify creating a collection of tools and using those tools in requests to model providers. A new `AiTool` data type has also been introduced to simplify defining tools for a toolkit.

`AiToolkit.implement` has been renamed to `AiToolkit.toLayer` for clarity, and defining handlers is now very similar to the way handlers are defined in the `@effect/rpc` library.

A complete example of an
`AiToolkit` implementation and usage can be found below:

```ts
import { AiLanguageModel, AiTool, AiToolkit } from "@effect/ai"
import { OpenAiClient, OpenAiLanguageModel } from "@effect/ai-openai"
import {
  FetchHttpClient,
  HttpClient,
  HttpClientRequest,
  HttpClientResponse
} from "@effect/platform"
import { NodeHttpClient, NodeRuntime } from "@effect/platform-node"
import { Array, Config, Console, Effect, Layer, Schema } from "effect"

// =============================================================================
// Domain Models
// =============================================================================

const DadJoke = Schema.Struct({
  id: Schema.String,
  joke: Schema.String
})

const SearchResponse = Schema.Struct({
  current_page: Schema.Int,
  limit: Schema.Int,
  next_page: Schema.Int,
  previous_page: Schema.Int,
  search_term: Schema.String,
  results: Schema.Array(DadJoke),
  status: Schema.Int,
  total_jokes: Schema.Int,
  total_pages: Schema.Int
})

// =============================================================================
// Service Definitions
// =============================================================================

export class ICanHazDadJoke extends Effect.Service<ICanHazDadJoke>()(
  "ICanHazDadJoke",
  {
    dependencies: [FetchHttpClient.layer],
    effect: Effect.gen(function* () {
      const httpClient = (yield* HttpClient.HttpClient).pipe(
        HttpClient.mapRequest(
          HttpClientRequest.prependUrl("https://icanhazdadjoke.com")
        )
      )
      const httpClientOk = HttpClient.filterStatusOk(httpClient)

      const search = Effect.fn("ICanHazDadJoke.search")(function (
        term: string
      ) {
        return httpClientOk
          .get("/search", {
            acceptJson: true,
            urlParams: { term }
          })
          .pipe(
            Effect.flatMap(HttpClientResponse.schemaBodyJson(SearchResponse)),
            Effect.orDie
          )
      })

      return {
        search
      } as const
    })
  }
) {}

// =============================================================================
// Toolkit Definition
// =============================================================================

export class DadJokeTools extends AiToolkit.make(
  AiTool.make("GetDadJoke", {
    description:
      "Fetch a dad joke based on a search term from the ICanHazDadJoke API",
    success: DadJoke,
    parameters: Schema.Struct({
      searchTerm: Schema.String
    })
  })
) {}

// =============================================================================
// Toolkit Handlers
// =============================================================================

export const DadJokeToolHandlers = DadJokeTools.toLayer(
  Effect.gen(function* () {
    const icanhazdadjoke = yield* ICanHazDadJoke
    return {
      GetDadJoke: (params) =>
        icanhazdadjoke.search(params.searchTerm).pipe(
          Effect.flatMap((response) => Array.head(response.results)),
          Effect.orDie
        )
    }
  })
).pipe(Layer.provide(ICanHazDadJoke.Default))

// =============================================================================
// Toolkit Usage
// =============================================================================

const makeDadJoke = Effect.gen(function* () {
  const languageModel = yield* AiLanguageModel.AiLanguageModel
  const toolkit = yield* DadJokeTools

  const response = yield* languageModel.generateText({
    prompt: "Come up with a dad joke about pirates",
    toolkit
  })

  return yield* languageModel.generateText({ prompt: response })
})

const program = Effect.gen(function* () {
  const model = yield* OpenAiLanguageModel.model("gpt-4o")
  const result = yield* model.use(makeDadJoke)
  yield* Console.log(result.text)
})

const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY")
}).pipe(Layer.provide(NodeHttpClient.layerUndici))

program.pipe(
  Effect.provide([OpenAi, DadJokeToolHandlers]),
  Effect.tapErrorCause(Effect.logError),
  NodeRuntime.runMain
)
```
### Patch Changes
- Updated dependencies []:
  - @effect/experimental@0.45.1