Senior Research Engineer at Lightning AI
- Lightning AI
- Switzerland
- @adrianwaelchli
Pinned
- Lightning-AI/lightning (Public)
  Deep learning framework to train, deploy, and ship AI products Lightning fast.
- pytorch-lightning-snippets (Public)
  A collection of code snippets for my PyTorch Lightning projects
3,903 contributions in the last year
Activity overview
Contributed to Lightning-AI/lightning, Lightning-AI/lit-llama, Lightning-Universe/lightning-quick-start, and 46 other repositories
Contribution activity
March 2023
Created 2 repositories
- awaelchli/lightning-quick-start (Python)
- awaelchli/animations (TypeScript)
Created a pull request in Lightning-AI/lightning that received 3 comments
Fix num_nodes not set for DDPFullyShardedNativeStrategy
What does this PR do?
Fixes #17028
This bug only affects the DDPFullyShardedNativeStrategy, because all others have num_nodes defined as a public s…
+31 −3 • 3 comments
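The fix described above hinges on strategies exposing num_nodes as a settable public attribute. A minimal sketch of that pattern follows; the class and attribute layout are illustrative assumptions, not the actual Lightning source:

```python
class Strategy:
    """Minimal stand-in for a distributed training strategy."""

    def __init__(self) -> None:
        # Default: single-node training.
        self._num_nodes = 1

    @property
    def num_nodes(self) -> int:
        return self._num_nodes

    @num_nodes.setter
    def num_nodes(self, num_nodes: int) -> None:
        # Without a public setter, connector code that assigns
        # strategy.num_nodes = N has no effect on internals that
        # read self._num_nodes, which is the kind of bug the PR fixes.
        self._num_nodes = num_nodes


strategy = Strategy()
strategy.num_nodes = 4
print(strategy.num_nodes)  # 4
```

A strategy missing the setter would silently keep num_nodes at its default, so the world size computed from it would be wrong on multi-node runs.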
Opened 56 other pull requests in 7 repositories
Lightning-AI/lightning: 30 merged, 2 open, 2 closed
- Fix rich import error in Google Colab
- Ignore README in pre-commit rules
- Remove the PL app
- Exclude some examples from docs navigation
- Remove pip install pytorch-lightning warning in docs
- Add link to past versions in the docs header
- Update introduction video
- Prepare 2.0.0 release
- Improve the error message for installing tensorboardx
- Sort the arguments in the Trainer docs
- Update LightningModule.all_gather docs
- Organize app examples
- Update links to latest PL docs
- Use base version check before calling _register_load_state_dict_pre_hook
- Force keyword-only usage to init Fabric
- Sort Trainer arguments based on importance
- Add cute teaser animations to Fabric docs
- Revert "Use base version when comparing torch versions"
- Fix failing example in Fabric CI
- Fix race condition in Fabric test
- Make BYOT imports forward compatible
- Update Fabric docs with installation instructions
- Make examples runnable
- Miscellaneous updates in Fabric docs
- Add test for torch.compile() with Fabric.setup()
- Some pull requests not shown.
Lightning-AI/lit-llama: 12 merged, 1 open
- WIP: Alpaca LoRA finetuning + quantization (3/n)
- Alpaca LoRA finetuning (2/n)
- Alpaca finetuning with LoRA 1/n
- Fix time display in generate.py
- Create tokenizer model for Shakespeare
- Speed up quantization in generate.py
- Simplify generate.py
- Relative paths and model configuration
- Initial README
- Fix parity between our model and original
- Script to convert Meta checkpoints to ours
- Basic training script for LLaMA
- Initial LLAMA inference
Lightning-AI/tutorials: 3 merged, 1 open
Lightning-Universe/lightning_sphinx_theme: 2 merged
Lightning-Universe/lightning-quick-start: 1 open
Lightning-AI/utilities: 1 merged
awaelchli/openfold: 1 open
Reviewed 179 pull requests in 4 repositories
Lightning-AI/lightning: 25 pull requests
- [TPU] v4 support
- Simplify rank_zero_experiment for torch.compile support
- feat: customize gradio components with lightning colors
- GPU suggestion does not require devices anymore
- Skip length checks for non-sized iterables
- Avoid inference_mode with torch.compile
- ci: update runner for IPU
- Generalize Optimizer validation to accommodate both FSDP 1.x and 2.x
- requires onnx & ci adjustment
- Fix a typo in logger arg in trainer.py
- Patch release v2.0.1
- ci/docs: wheels from cache
- Improve CLI output
- ci: building LTS docs
- ci: fix docs with caches
- docs: fix links
- ci: update runner for IPU
- fix typo in docs migration 1_6_regular
- test: adjust is_timing_close
- docs: update links to 1.6 1.5 1.4
- ci: separate integrations
- Update fastapi dependency pins
- Updated conda install commands in docs.
- Support all CombinedLoader modes during evaluation
- Add check for bf16 in deepspeed inference
- Some pull request reviews not shown.
Lightning-AI/lit-llama: 25 pull requests
- Alpaca LoRA finetuning (2/n)
- Alpaca finetuning with LoRA 1/n
- Add LoRA
- Fix tests without pip install
- Speed up quantization in generate.py
- Add generate.py and prepare_shakespeare.py tests
- Rework repo structure
- Create tokenizer model for Shakespeare
- Mypy and CPU workflows
- Simplify generate.py
- Relative paths and model configuration
- Fix prepare_shakespeare import
- Initial README
- bitsandbytes support
- Disable non-functioning torch.compile
- Re-land #4
- Fix parity between our model and original
- Script to convert Meta checkpoints to ours
- Match eps of original model
- Truncate the rope block size
- Basic training script for LLaMA
- Update to compare against official implementation.
- Add llama implementation based on nanoGPT
- Faster compiled generation concatenation
- Add back Meta's checkpointing conversion. Remove HF's
- Some pull request reviews not shown.
Lightning-AI/tutorials: 4 pull requests
Lightning-AI/lightning-ColossalAI: 1 pull request
Created an issue in pytorch/pytorch that received 3 comments
FSDP fails to load state dict under inference_mode
import os
import torch
…
with torch.inference_mode():
…
3 comments
Opened 3 other issues in 3 repositories
Lightning-Universe/lightning-quick-start: 1 open
Lightning-AI/lightning: 1 open
aqlaboratory/openfold: 1 open
16 contributions in private repositories (Mar 4 – Mar 22)