Quite impressive: I was able to run an LLM (GPT-2 117M via ggml) locally on my Mac.
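(For reference, these are roughly the steps I used to build the ggml gpt-2 example and fetch the converted 117M weights. Target names and the download script path are from memory of the ggml README at the time, so treat them as approximate.)

% git clone https://github.com/ggerganov/ggml
% cd ggml && mkdir build && cd build
% cmake .. && make -j4 gpt-2
% ../examples/gpt-2/download-ggml-model.sh 117M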
% ./bin/gpt-2 -m models/gpt-2-117M/ggml-model.bin -p "Let's talk about Machine Learning now"
main: seed = 1686112244
gpt2_model_load: loading model from 'models/gpt-2-117M/ggml-model.bin'
gpt2_model_load: n_vocab = 50257
gpt2_model_load: n_ctx = 1024
gpt2_model_load: n_embd = 768
gpt2_model_load: n_head = 12
gpt2_model_load: n_layer = 12
gpt2_model_load: ftype = 1
gpt2_model_load: qntvr = 0
gpt2_model_load: ggml tensor size = 224 bytes
gpt2_model_load: ggml ctx size = 384.77 MB
gpt2_model_load: memory size = 72.00 MB, n_mem = 12288
gpt2_model_load: model size = 239.08 MB
extract_tests_from_file : No test file found.
test_gpt_tokenizer : 0 tests failed out of 0 tests.
main: prompt: 'Let's talk about Machine Learning now'
main: number of tokens in prompt = 7, first 8 tokens: 5756 338 1561 546 10850 18252 783
Let's talk about Machine Learning now.
The first step is to get a good understanding of what machine learning is. This is where things get messy. What do you think is the most difficult aspect of machine learning?
Machine learning is the process of transforming data into an understanding of its contents and its operations. For example, in the following diagram, you can see that we use a machine learning approach to model an object.
The object is a piece of a puzzle with many different components and some of the problems it solves will be difficult to solve for humans.
What do you think of machine learning as?
Machine learning is one of the most important, because it can help us understand how our data are structured. You can understand the structure of the data as the object is represented in its representation.
What about data structures? How do you find out where a data structure or a structure is located in your data?
In a lot of fields, you can think of structures as
main: mem per token = 2008284 bytes
main: load time = 366.33 ms
main: sample time = 39.59 ms
main: predict time = 3448.31 ms / 16.74 ms per token
main: total time = 3894.15 ms
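That works out to roughly 1000 / 16.74 ≈ 60 generated tokens per second on this machine, with about 3448.31 / 16.74 ≈ 206 tokens produced in this run. The model size also looks about right: ftype = 1 means the weights are stored as 16-bit floats, i.e. roughly two bytes per parameter for the 117M-class model, which lands in the ballpark of the reported 239.08 MB once you add a bit of overhead.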