Japanese-capable chat AI 'Vicuna-13B' with accuracy comparable to ChatGPT and Google's Bard has been released, so I tried using it



A research team with members from the University of California, Berkeley and other institutions has released an open-source large language model, 'Vicuna-13B'. Vicuna-13B can generate answers with accuracy close to OpenAI's ChatGPT and Google's Bard, and it also supports Japanese. A working demo has also been published, so I tried using it.

Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality | by the Team with members from UC Berkeley, CMU, Stanford, and UC San Diego

https://vicuna.lmsys.org/



GitHub - lm-sys/FastChat: An open platform for training, serving, and evaluating large language model based chatbots.

https://github.com/lm-sys/FastChat

The research team developed the open-source chat AI Vicuna-13B, inspired by the announcements of LLaMA and Alpaca 7B. Pointing out that existing chat AIs do not disclose details of their training methods and architectures, the team argues that this 'hinders innovation in AI research and open-source development,' and emphasizes the significance of publishing its research results widely.

Vicuna-13B achieves performance that outperforms other open-source large language models such as Alpaca 7B by fine-tuning the LLaMA base model on data from ShareGPT, a browser extension and website that lets users share their ChatGPT conversations and prompts.

When the research team evaluated the response quality of various chat AIs with ChatGPT set at 100%, LLaMA scored 68% and Alpaca 7B scored 76%, while Vicuna-13B's quality reached 92%.
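The relative-quality figures above boil down to simple arithmetic: each model's judge-assigned answer scores are summed and expressed as a percentage of ChatGPT's total. A minimal sketch of that calculation follows; the per-answer scores here are hypothetical illustrations, not the team's actual evaluation data.

```python
def relative_quality(model_scores, baseline_scores):
    """Return a model's total judge score as a percentage of the baseline's total."""
    return 100 * sum(model_scores) / sum(baseline_scores)

# Hypothetical per-question judge scores (not real evaluation data)
chatgpt_scores = [9, 8, 9, 10]   # baseline model (treated as 100%)
vicuna_scores = [8, 8, 8, 9]     # candidate model

print(round(relative_quality(vicuna_scores, chatgpt_scores)))  # prints 92
```

With these made-up scores the candidate lands at 92% of the baseline, mirroring how the article's percentages are computed relative to ChatGPT.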



The research team first collected approximately 70,000 conversations from ShareGPT as training data for Vicuna-13B. It then enhanced the training scripts provided by Alpaca and trained the model on eight NVIDIA A100 GPUs, teaching it to properly handle long, multi-turn conversations and long text generation.

By training on ShareGPT data, Vicuna-13B can generate higher-quality text than LLaMA and Alpaca 7B, but like other large language models it is said to be weak at tasks that require reasoning and advanced computation. In addition, the accuracy of its output is sometimes limited, and safety measures such as bias mitigation have not been fully optimized.

A free web demo of Vicuna-13B is available at the following site, where you can try it for yourself.

Vicuna-13B
https://chat.lmsys.org/
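Besides the hosted web demo, the FastChat repository linked above also includes a command-line chat interface. The following is a hedged sketch of running it locally; the PyPI package name is `fschat`, and the `--model-path` value is purely illustrative, assuming Vicuna weights have already been obtained in accordance with LLaMA's license.

```shell
# Install FastChat (published on PyPI under the name "fschat")
pip install fschat

# Chat with a Vicuna model in the terminal.
# --model-path is illustrative: it assumes the weights are available
# locally or on the Hugging Face Hub, which requires accepting
# LLaMA's license terms.
python3 -m fastchat.serve.cli --model-path lmsys/vicuna-13b-v1.5
```

Running the 13B model locally requires a GPU with substantial memory, which is why the hosted demo at chat.lmsys.org is the easier way to try it.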



Let's try Vicuna-13B right away. First, select 'Vicuna-13B' from the drop-down list.



Enter the question you want to ask in the input field at the bottom and press the Enter key or click the 'Send' button. This time, I entered 'Can you use Japanese?'



Vicuna-13B immediately replied, 'Yes, I can use Japanese. If you have any questions, please feel free to ask.'



Next, I typed 'Is there any censorship or are there any restrictions in this chat?'



Vicuna-13B immediately replied, 'I may refuse to answer questions involving privacy or socially criticized content.'



I also typed 'What kinds of questions are inappropriate in this chat? Please give example sentences.'



Vicuna-13B was able to output detailed examples of inappropriate questions.



Vicuna-13B is permitted for non-commercial use only, in accordance with LLaMA's license, OpenAI's Terms of Service, and ShareGPT's privacy policy.

in Review, Software, Posted by log1r_ut