Highlights
- Pro
1,132 contributions in the last year
Contribution activity
July 2020
Created a pull request in huggingface/transformers that received 4 comments
[Fix] github actions CI by reverting #5138
On GitHub Actions, even when the tests fail the job is green, presumably because of my artifacts change (#5318). I am not clear on why this happen…
+2 −16 • 4 comments
- [fix] Style. Trying again
- [fix] check_code_quality
- [fix] check code quality
- [cleanup] T5 test, warnings
- T5 Model Cards
- [fix] mbart_en_ro_generate test now identical to fairseq
- Cleanup bart caching logic
- [mbart] prepare_translation_batch passes **kwargs to allow DeprecationWarning
- [wip] Label smooth
- [fix] pin sacrebleu to fix CI ImportError
- [Bart] enable test_torchscript, update test_tie_weights
- [fix] Marian tests import
- Ensure OpenAI GPT position_ids is correctly initialized and registered at init.
- [fix] T5 ONNX test: model.to(torch_device)
- [Reformer classification head] Implement the reformer model classification head for text classification
- docs(wandb): explain how to use W&B integration
- [Don't merge - Bert2Bert] Add training scripts and slight changes to Trainer
- [pipelines] Update fill mask pipeline to remove special tokens in the output
- fix incorrect docstring on bart summarization example
- Pipeline model type check
- Generate up to max_target_length sequences
- Change model outputs types to self-document outputs
- Test XLA examples
- QA pipeline BART compatible
- Add mbart-large-cc25, support translation finetuning
- [WIP] update pl=0.8.5
- [Generation] better error message
- Refactor generation sampling parameters (e.g. top k, temperature) into "Sampling" classes
Created an issue in facebookresearch/ParlAI that received 14 comments
How long should eval_model.py -t blended_skill_talk -m zoo/blender_90 take?
When I run:
    python parlai/scripts/eval_model.py -t blended_skill_talk \
        -mf zoo:blender/blender_90M/model --metrics ppl
I get output:
14:57:01 INFO…
14 comments
- Faster mBART finetuning
- [bart] decoder.last_hidden_state shape changes when passing labels
- t5 model card
- T5 ONNX Export Test Failing on GPU
- Fix slow test_enro_generate
- MBARTTokenizer set_lang logic will only work for src_lang=en_XX
- Seq2Seq: Option to not store whole dataset in memory
- 35 Model Hub entries fail AutoConfig
- TF: inputs vs input_ids
8 contributions in private repositories
Jul 1 – Jul 14