How do I choose the right model(s)?
Overview
Models – also known as Large Language Models or LLMs – are artificial intelligence programs that can perform a wide range of tasks, including generating text, translating between languages, summarizing information, and generating images. Dartmouth Chat provides access to a range of LLMs. Some are hosted locally, while others are cloud-based options available through third parties.
The list of models can be found in the upper-left of the Dartmouth Chat webpage. Models and model versions will be added and updated as the tool continues to evolve.
Before selecting a model, it’s important to consider a few things:
- Data Privacy. Dartmouth Chat supports access to both local (internal) models such as GPT-OSS and Llama and cloud-based (external) models such as Claude and GPT. For local models, all data is hosted on Dartmouth computers and is not shared within or outside the College for any purpose. Models marked as external are hosted remotely by third-party providers, and any data shared with them is subject to the specific provider’s terms. Links to their privacy statements can be found here.
- Cost. Locally hosted models like Llama and Mistral can be used on an unlimited basis by members of the Dartmouth community, while third-party models like Claude and GPT require the use of tokens. Tokens are limited to 2,000 per day per individual, though arrangements can be made to purchase additional tokens if needed. The relative cost of the third-party models is shown next to their names (e.g., FREE, $, or $$$). If you are considering a specific project, you can use the Dartmouth Tokenizer tool to understand what your token use might look like relative to specific models.
- Capability. The terms VISION and REASONING, visible by clicking the tag icon next to a model’s name, refer to specific capabilities of that model. VISION, for example, refers to the model’s ability to interpret images and videos. REASONING allows the model to draw connections between different pieces of information, make judgments, and weigh the benefits of potential actions.
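To get an intuitive sense of how quickly prompts consume a daily token budget, the sketch below applies a common rule of thumb: English text averages roughly four characters per token. This heuristic (and the function name used here) is illustrative only; actual counts vary by model, and the Dartmouth Tokenizer tool gives exact, model-specific figures.

```python
# Rough token estimate using the ~4-characters-per-token rule of thumb
# for English text. Illustrative only; use the Dartmouth Tokenizer tool
# for exact, model-specific counts.

def rough_token_estimate(text: str) -> int:
    """Return a ballpark token count for a piece of English text."""
    return max(1, len(text) // 4)

prompt = "Summarize the attached article in three bullet points."
print(rough_token_estimate(prompt), "tokens (approx.)")
```

For example, against a 2,000-token daily limit, a prompt of about 8,000 characters of English text would consume most of the day's allotment on its own.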
Regardless of your selection, it is recommended that you test your prompts and workflow with multiple models to establish what produces the best results for your purposes. To run your prompt against more than one model simultaneously, click the + sign to the right of the model name and select additional models.
Checking Model Status
In the bottom toolbar, you’ll find a link to “Model Status”. This menu provides real-time information about the health and availability of the models supported by Dartmouth Chat. If a model has a question mark next to it, that model does not report real-time status via its API; click the question mark to open the provider’s own status web page.
Didn’t find what you needed? Please reach out to research.computing@dartmouth.edu.