
Ruby langchainrb gem and custom configuration for the model setup

2024-07-28


Question:

I am working on a prototype using the langchainrb gem. I am using the Assistant module to implement a basic RAG (retrieval-augmented generation) architecture.
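For context, a minimal sketch of the kind of setup this refers to, assuming the Langchain::Assistant interface shown in the langchainrb README (the retrieval/vector-store wiring is omitted, and the instructions text is illustrative):

require "langchain"

# OpenAI-backed LLM client; the API key is read from the environment here
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

# Assistant that answers questions, e.g. over retrieved context in a RAG setup
assistant = Langchain::Assistant.new(
  llm: llm,
  instructions: "Answer the user's question using the provided context."
)

assistant.add_message(content: "What does the gem's README say about llm_options?")
assistant.run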

Everything works, and now I would like to customize the model configuration.

In the documentation there is no clear way of setting up the model. In my case, I would like to use OpenAI and set options such as the temperature.

In the README, there is a mention of using llm_options.

If I go to the OpenAI module documentation:

It says I have to check here:

But there is no mention of temperature there, for example. Also, in the example in the Langchain::LLM::OpenAI documentation, the options are totally different.


# ruby-openai options:

config_keys = %i[
  api_type
  api_version
  access_token
  log_errors
  organization_id
  uri_base
  request_timeout
  extra_headers
].freeze
# Example from the Langchain::LLM::OpenAI documentation:

{
  n: 1,
  temperature: 0.0,
  chat_completion_model_name: "gpt-3.5-turbo",
  embeddings_model_name: "text-embedding-3-small"
}.freeze
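For reference, a hedged reading of the two listings above: the ruby-openai config_keys appear to be what llm_options accepts, since langchainrb forwards them to the underlying ruby-openai client, which would explain why temperature is not among them. A minimal sketch under that assumption:

# llm_options look like client-level settings forwarded to ruby-openai,
# so only keys from the config_keys list above apply here
llm = Langchain::LLM::OpenAI.new(
  api_key: ENV["OPENAI_API_KEY"],
  llm_options: { request_timeout: 120 }
)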

Answer:

I had a conflict between llm_options and default_options. I thought they were the same thing with different priorities.

For the needs expressed in the question, I have to use default_options, as in here:

llm =
  Langchain::LLM::OpenAI.new(
    api_key: <openai_key>,
    # default_options set request-level parameters such as temperature and model
    default_options: {
      temperature: 0.0,
      chat_completion_model_name: "gpt-4o"
    }
  )
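Since llm_options configures the client layer and default_options the request layer, the two can be combined on the same client. To tie this back to the question, a sketch of handing the customized client to the assistant, assuming the same Langchain::Assistant interface as in the earlier sketch:

# Client with request-level defaults such as temperature and model name
llm = Langchain::LLM::OpenAI.new(
  api_key: ENV["OPENAI_API_KEY"],
  default_options: {
    temperature: 0.0,
    chat_completion_model_name: "gpt-4o"
  }
)

# The assistant then uses the customized model configuration
assistant = Langchain::Assistant.new(
  llm: llm,
  instructions: "Answer the user's question using the provided context."
)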
