Sometimes, we want the Kimi large language model to continue a given sentence. For example, in some customer service scenarios, we want the customer service bot to start every reply with "Dear customer, hello." For such needs, the Kimi API offers Partial Mode. Let's use specific code to explain how Partial Mode works:
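A minimal Python sketch of the request. The commented-out client call assumes an OpenAI-compatible SDK; the endpoint URL, API key, and model name are placeholders, not confirmed values:

```python
# Sketch of a Partial Mode request. The commented-out call uses placeholder
# endpoint, key, and model values; adjust them to your own account.
PREFIX = "Dear customer, hello. "

def build_messages(user_input: str, prefix: str) -> list:
    """Build a messages list whose last entry is the partial assistant
    message: role=assistant, partial=True, and the content to continue from."""
    return [
        {"role": "system", "content": "You are a customer service assistant."},
        {"role": "user", "content": user_input},
        {"role": "assistant", "content": prefix, "partial": True},
    ]

# With an OpenAI-compatible client (all values below are placeholders):
# from openai import OpenAI
# client = OpenAI(api_key="KIMI_API_KEY", base_url="https://api.moonshot.ai/v1")
# completion = client.chat.completions.create(
#     model="moonshot-v1-8k",  # placeholder model name
#     messages=build_messages("Where is my order?", PREFIX),
# )
# # The full reply is the fed prefix plus the generated continuation:
# print(PREFIX + completion.choices[0].message.content)
```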
- Add an extra message at the end of the messages list, with role=assistant and partial=True;
- Place the content you want to "feed" to the Kimi large language model in the content field. The model will start generating the response from this content;
- Concatenate the content from step 2 with the response generated by the Kimi large language model to form the complete reply.
Another common scenario is when the max_tokens value is set too low. This can result in the Kimi large language model being unable to output the complete response (in this case, the value of finish_reason is length, meaning the number of tokens in the generated response exceeds the max_tokens value set in the request). In such situations, if you are satisfied with the content already output and want the Kimi large language model to continue from where it left off, Partial Mode can be very useful.
Let's use a simple example to explain how to implement this:
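The loop below is an illustrative sketch, not Kimi SDK code: continue_until_done and the simulated fake_chat endpoint are hypothetical helpers, and in real use chat() would wrap an actual API call. Whenever finish_reason is length, the partial output is fed back as a partial assistant message so the model resumes where it stopped:

```python
# Sketch of resuming a truncated response with Partial Mode. fake_chat
# simulates the endpoint so the stitching logic can run end to end.
def continue_until_done(chat, messages, max_rounds: int = 5) -> str:
    """Call chat() repeatedly; while finish_reason is "length", feed the
    partial reply back as a partial assistant message and continue."""
    reply = ""
    for _ in range(max_rounds):
        content, finish_reason = chat(messages + (
            [{"role": "assistant", "content": reply, "partial": True}]
            if reply else []
        ))
        reply += content  # concatenate each continuation onto the reply
        if finish_reason != "length":
            break
    return reply

FULL = "Partial Mode lets the model continue from a given prefix."

def fake_chat(messages, chunk: int = 20):
    """Simulated endpoint: returns at most `chunk` characters of FULL after
    any prefix supplied via a trailing partial assistant message."""
    prefix = ""
    if messages and messages[-1].get("partial"):
        prefix = messages[-1]["content"]
    rest = FULL[len(prefix):]
    finish_reason = "length" if len(rest) > chunk else "stop"
    return rest[:chunk], finish_reason
```

In real use, chat() would call the completions endpoint and return the message content together with choices[0].finish_reason.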
The name Field in Partial Mode
The name field in Partial Mode is a special attribute that enhances the model's understanding of its role, compelling it to output content in the voice of the specified character. To illustrate how the name field is used in Partial Mode, let's consider an example of role-playing with the Kimi large language model, using the character Dr. Kelsier from the mobile game Arknights. By setting "name": "Kelsier", we ensure the model maintains character consistency, with the name field acting as a prefix for the output, prompting the Kimi large language model to respond as Kelsier:
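A sketch of the messages list for this role-play. The system prompt text is abbreviated for illustration; only the trailing partial assistant message with the name field is the point of the example:

```python
# Sketch: a partial assistant message carrying the name field, so the reply
# is generated in Kelsier's voice. The system prompt here is abbreviated.
def roleplay_messages(user_input: str) -> list:
    return [
        {"role": "system", "content": "You are Dr. Kelsier from Arknights. "
                                      "Stay in character at all times."},
        {"role": "user", "content": user_input},
        # partial=True plus name prompts the model to answer as Kelsier;
        # empty content means the model composes the reply from scratch.
        {"role": "assistant", "content": "", "partial": True,
         "name": "Kelsier"},
    ]
```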
Other Tips for Maintaining Character Consistency
There are also some general methods to help large language models maintain character consistency during long conversations:
- Provide clear character descriptions. For example, as we did above, when setting up a character, give a detailed introduction of their personality, background, and any specific traits or quirks they might have. This will help the Kimi large language model better understand and imitate the character;
- Add more details about the character they are supposed to play, including their tone of voice, style, personality, and even background, such as backstory and motivations. For example, we provided some quotes from Kelsier above;
- Guide how the character should act in various situations. If you expect the character to encounter certain types of user input, or if you want to control the model's output in some situations during the role-playing interaction, provide clear instructions and guidelines in the system prompt explaining how the character should act in those situations;
- If the conversation goes on for many rounds, you can also periodically use the system prompt to reinforce the character's settings, especially when the model starts to deviate. For example:
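One possible sketch of such periodic reinforcement. The reminder text, the every-10-turns threshold, and the with_reinforcement helper are all illustrative choices, not part of the Kimi API:

```python
# Sketch: re-inject the character's system prompt every few user turns so
# drift is corrected before the next reply. All values are illustrative.
REINFORCEMENT = {
    "role": "system",
    "content": "Reminder: you are Dr. Kelsier. Keep her calm, terse tone "
               "and never break character.",
}

def with_reinforcement(history: list, every: int = 10) -> list:
    """Return the history with the character reminder appended once per
    `every` user turns; otherwise return the history unchanged."""
    user_turns = sum(1 for m in history if m["role"] == "user")
    if user_turns and user_turns % every == 0:
        return history + [REINFORCEMENT]
    return history
```

The reinforced history is then sent as the messages list of the next request, exactly like any other conversation turn.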