---
library_name: keras-hub
pipeline_tag: text-generation
---

This is a Llama 3.1 8B model uploaded using the KerasHub library. It can be used with the JAX, TensorFlow, and PyTorch backends and is intended for causal language modeling (the CausalLM task).
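A minimal sketch of loading the model and generating text with KerasHub. The preset handle `hf://Divyasreepat/llama3.1_8b` is an assumption based on this repository's name; substitute the handle of the repository you are actually loading from, and pick whichever backend you prefer.

```python
import os

# Choose a backend before importing Keras; "jax" can be swapped for
# "tensorflow" or "torch".
os.environ["KERAS_BACKEND"] = "jax"

import keras_hub

# Load tokenizer, preprocessor, and weights from the Hub.
# The handle below is inferred from the repo name and may need adjusting.
causal_lm = keras_hub.models.Llama3CausalLM.from_preset(
    "hf://Divyasreepat/llama3.1_8b"
)

# Generate a continuation for a prompt.
output = causal_lm.generate("What is Keras?", max_length=64)
print(output)
```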

Model config:

- name: llama3_backbone
- trainable: True
- vocabulary_size: 128256
- num_layers: 32
- num_query_heads: 32
- hidden_dim: 4096
- intermediate_dim: 14336
- rope_max_wavelength: 500000.0
- rope_position_scaling_factor: 1.0
- rope_frequency_adjustment_factor: 8.0
- rope_low_freq_factor: 1.0
- rope_high_freq_factor: 4.0
- rope_pretraining_sequence_length: 8192
- num_key_value_heads: 8
- layer_norm_epsilon: 1e-06
- dropout: 0
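For reference, the backbone can also be built directly from the hyperparameters above. This is a sketch only: it produces a randomly initialized model (pretrained weights come from the preset, as shown earlier), and it assumes an installed keras-hub version whose `Llama3Backbone` constructor accepts the Llama 3.1 RoPE-scaling arguments listed in this config.

```python
import keras_hub

# Randomly initialized Llama 3.1 8B backbone matching the config above.
# No pretrained weights are loaded this way; use `from_preset` for that.
backbone = keras_hub.models.Llama3Backbone(
    vocabulary_size=128256,
    num_layers=32,
    num_query_heads=32,
    hidden_dim=4096,
    intermediate_dim=14336,
    rope_max_wavelength=500000.0,
    rope_position_scaling_factor=1.0,
    rope_frequency_adjustment_factor=8.0,
    rope_low_freq_factor=1.0,
    rope_high_freq_factor=4.0,
    rope_pretraining_sequence_length=8192,
    num_key_value_heads=8,
    layer_norm_epsilon=1e-06,
    dropout=0.0,
)
backbone.summary()
```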

This model card was generated automatically and should be completed by the model author. See the Model Cards documentation for more information.