Kush Verma

Reputation: 37

Running Databricks Dolly locally on my Mac M1

I am trying to deploy and run Databricks Dolly, a recently released open-source LLM, locally as an alternative to GPT.

Doc - https://learn.microsoft.com/en-us/azure/architecture/aws-professional/services

Tried to run this with Hugging Face transformers.

Code -


from typing import Optional

import numpy as np
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    PreTrainedModel,
    PreTrainedTokenizer
)

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v1-6b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v1-6b", device_map="auto", trust_remote_code=True, offload_folder='offload')

PROMPT_FORMAT = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""


def generate_response(instruction: str, *, model: PreTrainedModel, tokenizer: PreTrainedTokenizer,
                      do_sample: bool = True, max_new_tokens: int = 256, top_p: float = 0.92, top_k: int = 0,
                      **kwargs) -> Optional[str]:
    input_ids = tokenizer(PROMPT_FORMAT.format(instruction=instruction), return_tensors="pt").input_ids.to("cuda")

    # each of these is encoded to a single token
    response_key_token_id = tokenizer.encode("### Response:")[0]
    end_key_token_id = tokenizer.encode("### End")[0]

    gen_tokens = model.generate(input_ids, pad_token_id=tokenizer.pad_token_id, eos_token_id=end_key_token_id,
                                do_sample=do_sample, max_new_tokens=max_new_tokens, top_p=top_p, top_k=top_k,
                                **kwargs)[0].cpu()

    # find where the response begins
    response_positions = np.where(gen_tokens == response_key_token_id)[0]

    if len(response_positions) > 0:
        response_pos = response_positions[0]

        # find where the response ends
        end_pos = None
        end_positions = np.where(gen_tokens == end_key_token_id)[0]
        if len(end_positions) > 0:
            end_pos = end_positions[0]

        return tokenizer.decode(gen_tokens[response_pos + 1: end_pos]).strip()

    return None


# Sample similar to: "Excited to announce the release of Dolly, a powerful new language model from Databricks! #AI #Databricks"
generate_response("Write a tweet announcing Dolly, a large language model from Databricks.", model=model,
                  tokenizer=tokenizer)

I am getting the following error -

AssertionError: Torch not compiled with CUDA enabled

While looking on the internet I found: *PyTorch only supports CUDA on x86_64 architectures, so CUDA support is not available for Apple M1 Macs.*
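
For reference, this is how one can check which backends the local PyTorch build supports (the MPS check assumes PyTorch 1.12 or newer):

import torch

# CUDA is only built for x86_64 / NVIDIA GPUs; on an M1 Mac this prints False
print(torch.cuda.is_available())

# Metal Performance Shaders (MPS) backend for Apple Silicon, available since PyTorch 1.12
print(torch.backends.mps.is_available())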

What should I do?

Upvotes: 0

Views: 2294

Answers (2)

aquaman

Reputation: 1658

As pointed out here, the M1 does not support CUDA.

You can, however, generate the response on the CPU (it takes a bit longer) -

input_ids = tokenizer(PROMPT_FORMAT.format(instruction=instruction), return_tensors="pt").input_ids.to("cpu")
...
gen_tokens = model.generate(input_ids, pad_token_id=tokenizer.pad_token_id, eos_token_id=end_key_token_id,
                            do_sample=do_sample, max_new_tokens=max_new_tokens, top_p=top_p, top_k=top_k,
                            **kwargs)[0].cpu()

And call it like this -

# Sample similar to: "Excited to announce the release of Dolly, a powerful new language model from Databricks! #AI #Databricks"
res = generate_response("Write a tweet announcing Dolly, a large language model from Databricks.", model=model,
                  tokenizer=tokenizer)

print(res)

Which should give something like -

Introducing Dolly: the largest, most accurate language model ever! Get ready to have conversations that make sense! #Databricks #LanguageModel
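
If you want the same function to run unchanged on CUDA machines and on the M1, here is a minimal sketch that picks the device at runtime (assuming PyTorch 1.12+ for the MPS backend; note that the model and the input tensors must end up on the same device):

import torch

# Prefer CUDA on NVIDIA GPUs, MPS on Apple Silicon (PyTorch 1.12+), else fall back to CPU
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

input_ids = tokenizer(PROMPT_FORMAT.format(instruction=instruction),
                      return_tensors="pt").input_ids.to(device)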

Upvotes: 0

platypus

Reputation: 516

The M1 does not come with CUDA support; you probably need to remove the .to("cuda") call from this line to make it work:

input_ids = tokenizer(PROMPT_FORMAT.format(instruction=instruction), return_tensors="pt").input_ids.to("cuda")
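
which, after removing the call, leaves the tensor on the default (CPU) device:

input_ids = tokenizer(PROMPT_FORMAT.format(instruction=instruction), return_tensors="pt").input_ids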

Upvotes: 0
