
The model doesn't load in Google Colab with 4-bit NF4 quantization. Why?

#12
by perceptron-743

I have a quick question about this model. Is it not possible to load it into the 15 GB of VRAM on a Google Colab GPU? I have been trying to load it with the following quantization config:

import torch
from transformers import BitsAndBytesConfig

# defining the 4-bit NF4 quantization config
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
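For reference, the Colab runtime has the usual quantization dependencies installed first; as far as I understand, 4-bit loading needs bitsandbytes and accelerate alongside transformers:

# Colab cell: 4-bit loading requires bitsandbytes and accelerate
!pip install -U transformers accelerate bitsandbytes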

But it gives a CUDA out-of-memory error every single time.

If it can't be loaded, I'm confused as to why not: Mistral-7B loads just fine, and its safetensors take up considerably more space than this model's, so by that comparison this model should load too. Maybe it fails because of something I have done wrong. I would really appreciate it if you could resolve my query.
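As a back-of-the-envelope check of my assumption: with NF4, each weight takes roughly half a byte, so a ~7B-parameter model should need about 3.5 GB for weights plus some quantization overhead, which is well under 15 GB. A quick sketch of that arithmetic:

# rough estimate, assuming ~7e9 parameters at 4 bits (0.5 bytes) each
n_params = 7e9
print(f"approx. 4-bit weight memory: {n_params * 0.5 / 1e9:.1f} GB")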

Here is the code I used to load and quantize it.


import pprint
import time
from typing import Dict, List

import pandas as pd
import torch
import tqdm
from transformers import BitsAndBytesConfig, pipeline


def generate_result(summary_model: str, config: BitsAndBytesConfig,
                    document_ids: Dict[str, str], device: str = "cpu") -> List[Dict[str, str]]:
    # summarization pipeline; the quantization config has to be forwarded to
    # the underlying model via model_kwargs, and device placement goes there
    # too, since .to() is not supported for 4-bit bitsandbytes models
    summarizer = pipeline("summarization", model=summary_model,
                          model_kwargs={"quantization_config": config,
                                        "device_map": device})

    # summarize each document and collect the results
    docs = []
    for document_name, document_id in tqdm.tqdm(document_ids.items()):
        print("-" * 100)
        print("Document Name: %s" % document_name)

        # time how long each document takes to summarize
        begin = time.time()
        texts = get_pdf_by_code(document_id)  # my own PDF-fetching helper
        summary = summarizer(texts, max_length=300, truncation=True, do_sample=False)

        summary = " ".join(item["summary_text"] for item in summary)
        pprint.pprint("-" * 100)
        duration = time.time() - begin

        docs.append({
            "document_name": document_name,
            "summary": summary,
            "seconds": duration,
            "model_name": summary_model,
        })

    return docs


model_checkpoint = "tiiuae/falcon-mamba-7b-instruct"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
output = generate_result(summary_model=model_checkpoint,
                         config=nf4_config, document_ids=hashcodes, device=device)
df = pd.DataFrame(output)
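For what it's worth, here is the more direct loading route I would also expect to fit in the same 15 GB, sketched from the transformers docs: device_map="auto" lets accelerate place the quantized weights, and get_memory_footprint() reports what actually got allocated.

from transformers import AutoModelForCausalLM, AutoTokenizer

# minimal sketch: load the checkpoint directly with the same NF4 config
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    model_checkpoint,
    quantization_config=nf4_config,
    device_map="auto",
)
print(f"actual footprint: {model.get_memory_footprint() / 1e9:.2f} GB")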
