Reputation: 103
I have pushed a custom model named Orcawise/eu-ai-act-align to Hugging Face (a popular platform for sharing and discovering natural language processing models and other AI-related resources).
I created this model by fine-tuning google/gemma-2b on custom data. Now I am trying to use it with the Inference API (serverless) using the code below:
import requests

API_URL = "https://api-inference.huggingface.co/models/Orcawise/eu-ai-act-align"
headers = {"Authorization": "Bearer xxxxxxxxxxxxxxxxxxxxxxxx"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "Can you please let us know more details about your ",
})
print("Response from custom model is {}".format(output))
I am getting the response below:
{'error': 'You are trying to access a gated repo.\nMake sure to request access at https://huggingface.co/google/gemma-2b and pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`.'}
Solutions tried
My question is: why does this happen when I already have access to the pretrained model, and what is the best way to use a gated pretrained model together with a custom model on the Inference API (serverless)? Any ideas are welcome.
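For reference, here is a variant of my query helper I am experimenting with. It raises on HTTP errors instead of silently returning the error JSON, so a gated-repo rejection (HTTP 403) is easier to spot; the idea that the token must belong to an account that has accepted the google/gemma-2b license is my assumption, since my model derives from that gated base model.

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Orcawise/eu-ai-act-align"

def build_headers(token):
    # Assumption: the token must belong to an account that has accepted
    # the google/gemma-2b license, because the custom model was
    # fine-tuned from that gated base model.
    return {"Authorization": "Bearer {}".format(token)}

def query(payload, token):
    response = requests.post(API_URL, headers=build_headers(token), json=payload)
    if response.status_code == 403:
        # Surface the gated-repo error explicitly instead of returning it.
        raise PermissionError(response.json().get("error", "gated repo"))
    response.raise_for_status()
    return response.json()

# Usage (requires a real token with gemma-2b access):
# output = query({"inputs": "Can you please let us know more details about your "},
#                token="hf_...")
```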
Upvotes: 0
Views: 434