The huggingface_hub library offers a range of mixins that can be used as parent classes for your objects, in order to provide simple upload and download functions.
ModelHubMixin

A generic base Model Hub mixin. Define your own mixin for anything by inheriting from this class and overriding _from_pretrained and _save_pretrained to define custom logic for saving and loading your classes. See huggingface_hub.PyTorchModelHubMixin for an example.
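For illustration, here is a minimal sketch of such a subclass. The class name and the config.json layout are hypothetical, and the exact set of arguments that from_pretrained forwards to _from_pretrained may vary between huggingface_hub versions.

```python
import json
import os

from huggingface_hub import ModelHubMixin


class MyModel(ModelHubMixin):
    """Hypothetical class that persists its config as a JSON file."""

    def __init__(self, config: dict = None):
        self.config = config or {}

    def _save_pretrained(self, save_directory):
        # Called by save_pretrained() once the target directory exists.
        with open(os.path.join(save_directory, "config.json"), "w") as f:
            json.dump(self.config, f)

    @classmethod
    def _from_pretrained(
        cls,
        model_id,
        revision,
        cache_dir,
        force_download,
        proxies,
        resume_download,
        local_files_only,
        use_auth_token,
        **model_kwargs,
    ):
        # Called by from_pretrained() with the resolved download options.
        # Fetch files from the Hub (or read a local directory) and rebuild
        # the object here; the body is left out in this sketch.
        ...
```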
from_pretrained( pretrained_model_name_or_path: str, force_download: bool = False, resume_download: bool = False, proxies: typing.Dict = None, use_auth_token: typing.Optional[str] = None, cache_dir: typing.Optional[str] = None, local_files_only: bool = False, **model_kwargs )
Parameters

- pretrained_model_name_or_path (str or os.PathLike) —
  Can be either:
  - The model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A revision, specified by appending @ to the end of model_id, like this: dbmdz/bert-base-german-cased@main. The revision is the specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  - A path to a directory containing model weights saved using save_pretrained, e.g., ./my_model_directory/.
  - None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict).
- force_download (bool, optional, defaults to False) —
  Whether to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) —
  Whether to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) —
  A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- use_auth_token (str or bool, optional) —
  The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).
- cache_dir (Union[str, os.PathLike], optional) —
  Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- local_files_only (bool, optional, defaults to False) —
  Whether to only look at local files (i.e., do not try to download the model).
- model_kwargs (Dict, optional) —
  model_kwargs will be passed to the model during initialization.
Instantiate a pretrained PyTorch model from a pretrained model configuration hosted on huggingface.co. The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Passing use_auth_token=True is required when you want to use a private model.
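For example, assuming a class MyModel that inherits from one of these mixins (the class and the repo id below are hypothetical):

```python
# Load from a hypothetical repo; use_auth_token=True reads the token stored
# by transformers-cli login (in ~/.huggingface) and is required for private repos.
model = MyModel.from_pretrained("username/my-model", use_auth_token=True)

# The model comes back in evaluation mode; switch back before training:
model.train()
```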
push_to_hub( repo_path_or_name: typing.Optional[str] = None, repo_url: typing.Optional[str] = None, commit_message: typing.Optional[str] = 'Add model', organization: typing.Optional[str] = None, private: typing.Optional[bool] = None, api_endpoint: typing.Optional[str] = None, use_auth_token: typing.Union[bool, str, NoneType] = None, git_user: typing.Optional[str] = None, git_email: typing.Optional[str] = None, config: typing.Optional[dict] = None )
Parameters

- repo_path_or_name (str, optional) —
  Can either be a repository name for your model or tokenizer in the Hub or a path to a local folder (in which case the repository will have the name of that local folder). If not specified, will default to the name given by repo_url and a local directory with that name will be created.
- repo_url (str, optional) —
  Specify this in case you want to push to an existing repository in the Hub. If unspecified, a new repository will be created in your namespace (unless you specify an organization) with repo_path_or_name.
- commit_message (str, optional, defaults to "Add model") —
  Message to commit while pushing.
- organization (str, optional) —
  Organization in which you want to push your model or tokenizer (you must be a member of this organization).
- private (bool, optional) —
  Whether the repository created should be private.
- api_endpoint (str, optional) —
  The API endpoint to use when pushing the model to the Hub.
- use_auth_token (bool or str, optional) —
  The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.
- git_user (str, optional) —
  Will override the git config user.name for committing and pushing files to the Hub.
- git_email (str, optional) —
  Will override the git config user.email for committing and pushing files to the Hub.
- config (dict, optional) —
  Configuration object to be saved alongside the model weights.
Upload model checkpoint or tokenizer files to the Hub while
synchronizing a local clone of the repo in repo_path_or_name.
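A sketch of a typical call; the repo name and config values are made up for illustration:

```python
model.push_to_hub(
    repo_path_or_name="my-model",  # hypothetical repo / local folder name
    commit_message="Add model",
    private=True,                  # create the repository as private
    config={"hidden_size": 128},   # optional dict saved with the weights
)
```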
save_pretrained( save_directory: str, config: typing.Optional[dict] = None, push_to_hub: bool = False, **kwargs )
Parameters

- save_directory (str) —
  Directory in which to save the weights.
- config (dict, optional) —
  Configuration to save alongside the weights, if provided (must be a dict).
- push_to_hub (bool, optional, defaults to False) —
  Set to True to also push your weights to the Hugging Face Hub.
- kwargs (Dict, optional) —
  kwargs will be passed to push_to_hub.
Save weights to a local directory.
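For instance (the directory and config values are illustrative):

```python
model.save_pretrained(
    "./my_model_directory",
    config={"hidden_size": 128},  # optional dict saved alongside the weights
    push_to_hub=False,            # set to True to also upload to the Hub
)
```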
from_pretrained_keras( *args, **kwargs )
Parameters

- pretrained_model_name_or_path (str or os.PathLike) —
  Can be either:
  - The model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A revision, specified by appending @ to the end of model_id, like this: dbmdz/bert-base-german-cased@main. The revision is the specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  - A path to a directory containing model weights saved using save_pretrained, e.g., ./my_model_directory/.
  - None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict).
- force_download (bool, optional, defaults to False) —
  Whether to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) —
  Whether to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) —
  A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- use_auth_token (str or bool, optional) —
  The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).
- cache_dir (Union[str, os.PathLike], optional) —
  Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- local_files_only (bool, optional, defaults to False) —
  Whether to only look at local files (i.e., do not try to download the model).
- model_kwargs (Dict, optional) —
  model_kwargs will be passed to the model during initialization.
Instantiate a pretrained Keras model from a pretrained model on the Hub. The model is expected to be in SavedModel format.

Passing use_auth_token=True is required when you want to use a private model.
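A minimal usage sketch, with a hypothetical repo id:

```python
from huggingface_hub import from_pretrained_keras

# The repo must contain a model in SavedModel format; pass use_auth_token=True
# for a private repo.
model = from_pretrained_keras("username/my-keras-model", use_auth_token=True)
```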
push_to_hub_keras( model, repo_path_or_name: typing.Optional[str] = None, repo_url: typing.Optional[str] = None, log_dir: typing.Optional[str] = None, commit_message: typing.Optional[str] = 'Add model', organization: typing.Optional[str] = None, private: typing.Optional[bool] = None, api_endpoint: typing.Optional[str] = None, use_auth_token: typing.Union[bool, str, NoneType] = True, git_user: typing.Optional[str] = None, git_email: typing.Optional[str] = None, config: typing.Optional[dict] = None, include_optimizer: typing.Optional[bool] = False, task_name: typing.Optional[str] = None, plot_model: typing.Optional[bool] = True, **model_save_kwargs )
Parameters

- model (Keras.Model) —
  The Keras model you'd like to push to the Hub. The model must be compiled and built.
- repo_path_or_name (str, optional) —
  Can either be a repository name for your model or tokenizer in the Hub or a path to a local folder (in which case the repository will have the name of that local folder). If not specified, will default to the name given by repo_url and a local directory with that name will be created.
- repo_url (str, optional) —
  Specify this in case you want to push to an existing repository in the Hub. If unspecified, a new repository will be created in your namespace (unless you specify an organization) with repo_path_or_name.
- log_dir (str, optional) —
  TensorBoard logging directory to be pushed. The Hub automatically hosts and displays a TensorBoard instance if log files are included in the repository.
- commit_message (str, optional, defaults to "Add model") —
  Message to commit while pushing.
- organization (str, optional) —
  Organization in which you want to push your model or tokenizer (you must be a member of this organization).
- private (bool, optional) —
  Whether the repository created should be private.
- api_endpoint (str, optional) —
  The API endpoint to use when pushing the model to the Hub.
- use_auth_token (bool or str, optional, defaults to True) —
  The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface). Will default to True.
- git_user (str, optional) —
  Will override the git config user.name for committing and pushing files to the Hub.
- git_email (str, optional) —
  Will override the git config user.email for committing and pushing files to the Hub.
- config (dict, optional) —
  Configuration object to be saved alongside the model weights.
- include_optimizer (bool, optional, defaults to False) —
  Whether or not to include the optimizer during serialization.
- task_name (str, optional) —
  Name of the task the model was trained on.
- plot_model (bool, optional, defaults to True) —
  Setting this to True will plot the model and put it in the model card. Requires graphviz and pydot to be installed.
- model_save_kwargs (dict, optional) —
  model_save_kwargs will be passed to tf.keras.models.save_model().
Upload model checkpoint or tokenizer files to the Hub while synchronizing a
local clone of the repo in repo_path_or_name.
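A sketch of a typical call, assuming TensorFlow is installed; the repo name and the toy model are made up for illustration:

```python
import tensorflow as tf

from huggingface_hub import push_to_hub_keras

# Toy model; push_to_hub_keras requires the model to be compiled and built.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse")

push_to_hub_keras(
    model,
    repo_path_or_name="my-keras-model",  # hypothetical repo name
    log_dir="./logs",                    # optional TensorBoard logs shown on the Hub
)
```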
save_pretrained_keras( model, save_directory: str, config: typing.Union[typing.Dict[str, typing.Any], NoneType] = None, include_optimizer: typing.Optional[bool] = False, plot_model: typing.Optional[bool] = True, task_name: typing.Optional[str] = None, **model_save_kwargs )
Parameters

- model (Keras.Model) —
  The Keras model you'd like to save. The model must be compiled and built.
- save_directory (str) —
  Directory in which to save the Keras model.
- config (dict, optional) —
  Configuration object to be saved alongside the model weights.
- include_optimizer (bool, optional, defaults to False) —
  Whether or not to include the optimizer in serialization.
- plot_model (bool, optional, defaults to True) —
  Setting this to True will plot the model and put it in the model card. Requires graphviz and pydot to be installed.
- task_name (str, optional) —
  Name of the task the model was trained on.
- model_save_kwargs (dict, optional) —
  model_save_kwargs will be passed to tf.keras.models.save_model().
Saves a Keras model to save_directory in SavedModel format. Use this if you’re using the Functional or Sequential APIs.
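For example (the model and directory are illustrative):

```python
from huggingface_hub import save_pretrained_keras

save_pretrained_keras(
    model,
    "./my_keras_model",       # target directory for the SavedModel
    include_optimizer=False,  # skip optimizer state
    plot_model=True,          # adds a plot to the model card; needs graphviz and pydot
)
```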
KerasModelHubMixin

Mixin to provide model Hub upload/download capabilities to Keras models. Inherit from this class to obtain the following internal methods:

- _from_pretrained, to load a model from the Hub or from local files.
- _save_pretrained, to save a model in the SavedModel format.
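A minimal sketch of mixing this class into a subclassed Keras model (the class name and layer choices are hypothetical):

```python
import tensorflow as tf

from huggingface_hub import KerasModelHubMixin


class MyKerasModel(tf.keras.Model, KerasModelHubMixin):
    """Hypothetical Keras model with Hub save/load capabilities mixed in."""

    def __init__(self, **kwargs):
        super().__init__()
        self.dense = tf.keras.layers.Dense(2)

    def call(self, inputs):
        return self.dense(inputs)


# The mixin contributes save_pretrained, push_to_hub and from_pretrained:
# MyKerasModel.from_pretrained("username/my-keras-model")  # hypothetical repo
```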