Popular repositories
- Forked from fastai/fastai: the fast.ai deep learning library, lessons, and tutorials
2,639 contributions in the last year
Activity overview
Contributed to huggingface/transformers, huggingface/accelerate, fastai/fastbook, and 5 other repositories
Contribution activity
April 2021
Created 52 commits in 4 repositories
Created a pull request in huggingface/transformers that received 9 comments

Replace error by warning when loading an architecture in another

What does this PR do?
#10586 introduced a breaking change by mistake by removing the possibility to do something like:
from transformers import Ber…

+16 −15 • 9 comments
Opened 25 other pull requests in 4 repositories
huggingface/transformers: 21 merged, 1 closed
- Trainer support for IterableDataset for evaluation and predict
- Support for set_epoch in IterableDataset
- Trainer iterable dataset
- Fix #10128
- Tokenizer fast save
- Indent code block in the documentation
- Avoid using no_sync on SageMaker DP
- Make sure code blocks are indented with four spaces
- Doc check: a bit of clean up
- Make get_special_tokens_mask consider all tokens
- Add support for multiple models for one config in auto classes
- Don't duplicate logs in TensorBoard and handle --use_env
- Fix and refactor check_repo
- Some styling of the training table in Notebooks
- Dummies multi backend
- Auto feature extractor
- Make a base init in FeatureExtractionMixin
- Fix distributed gather for tuples of tensors of varying sizes
- Document common config attributes
- Add center_crop to ImageFeatureExtractionMixin
- Refactor AutoModel classes and add Flax Auto classes
- Add a script to check inits are consistent
huggingface/blog: 1 merged
huggingface/tokenizers: 1 open
huggingface/datasets: 1 merged
Reviewed 65 pull requests in 3 repositories
huggingface/transformers: 61 pull requests
- [debug utils] activation overflow detector
- Add LUKE
- New TF examples
- Adding pipeline task aliases.
- Add prefix to examples in model_doc rst
- [troubleshooting] add 2 points of reference to the offline mode
- Tokenizer fast save
- Indent code block in the documentation
- Refactor GPT2
- Doc check: a bit of clean up
- added cache_dir=model_args.cache_dir to all example with cache_dir arg
- [WIP] FSMT bart-like refactor
- fix docs for decoder_input_ids
- Fix GPT-2 warnings
- Add documentation for BertJapanese
- Added translation example script
- model_path should be ignored as the checkpoint path
- Minor typos fixed
- [examples/translation] support mBART-50 and M2M100 fine-tuning
- [examples run_clm] fix _LazyModule hasher error
- Add support for multiple models for one config in auto classes
- [setup] make fairscale and deepspeed setup extras
- [tests] relocate core integration tests
- [setup] extras[docs] must include 'all'
- [trainer] solve "scheduler before optimizer step" warning
- Some pull request reviews not shown.