---
license: apache-2.0
datasets:
  - hackathon-somos-nlp-2023/ask2democracy-cfqa-salud-pension
language:
  - es
library_name: transformers
pipeline_tag: text2text-generation
tags:
  - democracy
  - public debate
  - question answering
  - RAG
  - Retrieval Augmented Generation
---

# About the Ask2Democracy project

This model was developed as part of the Ask2Democracy project during the 2023 Somos NLP Hackathon. Our focus during the hackathon was on enhancing generative capabilities in Spanish by training an open-source model for this purpose, intended to be incorporated into the space demo. However, we encountered performance limitations due to the model's large size, which caused issues when running it on limited hardware: we observed an inference time of approximately 70 seconds even when using a GPU.

To address this issue, we are currently working on better ways to integrate the model into the Ask2Democracy space demo. Further work is required to improve the model's performance, and updates are expected to be integrated into the space demo.

**Developed by:**

## What is the baizemocracy-lora-7B-cfqa-conv model?

This model is an open-source chat model fine-tuned with LoRA and inspired by the Baize project. It was trained on the Baize datasets together with the ask2democracy-cfqa-salud-pension dataset, which contains almost 4k instructions for answering questions based on context relevant to citizen concerns and public debate in Spanish.

Two model variations were trained during the 2023 Somos NLP Hackathon:

- A generative, context-focused model
- A conversational-style focused model

This variation, Baizemocracy-conv, is focused on a more conversational way of asking questions, while the other variation is more focused on source-based retrieval augmented generation (see the About pre-processing section below).

Testing is a work in progress; we decided to share both model variations with the community in order to involve more people in experimenting with what works better and finding other possible use cases.
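
For reference, here is a minimal inference sketch (not the project's official code) showing how a LoRA adapter like this one can be attached to a LLaMA-7B base with transformers and peft. The base checkpoint and adapter repo ids below are assumptions, as the card does not state them:

```python
# Hypothetical inference sketch: attach the LoRA adapter to a LLaMA-7B base.
# Both repo ids are assumptions -- replace them with the checkpoints you use.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_id = "decapoda-research/llama-7b-hf"  # assumed base checkpoint
adapter_id = "hackathon-somos-nlp-2023/baizemocracy-lora-7B-cfqa-conv"  # assumed adapter id

tokenizer = LlamaTokenizer.from_pretrained(base_id)
model = LlamaForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # load the LoRA weights

# Prompt following the training format described in the pre-processing section.
prompt = (
    "Given the Context answer the Question. Answers must be source based, "
    "use topics to elaborate on the Response if they're provided."
    " Question: '¿Qué cambios propone la reforma pensional?'"
    " Context: <retrieved passage goes here>"
    " Topics: <optional topics>"
    " Response: '"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```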

## Training Parameters

- Base Model: LLaMA-7B
- Training Epochs: 1
- Batch Size: 16
- Maximum Input Length: 512
- Learning Rate: 2e-4
- LoRA Rank: 8
- Updated Modules: All Linears
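
A minimal sketch of a LoRA configuration matching the parameters above, using the peft library; `lora_alpha` and `lora_dropout` are assumptions, since the card does not report them:

```python
# Hypothetical LoRA setup matching the parameters listed above.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                # LoRA Rank: 8
    lora_alpha=16,      # assumed; not reported in this card
    lora_dropout=0.05,  # assumed; not reported in this card
    # "All Linears" for LLaMA-7B: attention and MLP projection layers.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
# model = get_peft_model(base_model, lora_config)
# Then train for 1 epoch, batch size 16, learning rate 2e-4, max input length 512.
```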

## Training Dataset

### About pre-processing

The ask2democracy-cfqa-salud-pension dataset was formatted like this:

```python
def format_ds(example):
    # Build a single training string: instruction preamble, question,
    # retrieved context, optional topics, and the expected response.
    example["text"] = (
        "Given the Context answer the Question. Answers must be source based, "
        "use topics to elaborate on the Response if they're provided."
        # "Answer the question and use any available context or related topics if they are available"
        + " Question: '{}'".format(example["input"].strip())
        + " Context: {}".format(example["instruction"].strip())
        + " Topics: {}".format(example["topics"])
        + " Response: '{}'".format(example["output"].strip())
    )
    return example
```
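
For reference, a minimal sketch of applying this function with the datasets library (assuming the dataset exposes a train split):

```python
from datasets import load_dataset

# Load the dataset listed in the metadata and apply the formatting function.
ds = load_dataset("hackathon-somos-nlp-2023/ask2democracy-cfqa-salud-pension")
ds = ds.map(format_ds)
print(ds["train"][0]["text"][:200])  # inspect one formatted example
```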

More details can be found in the Ask2Democracy GitHub repository.