Pinned
- NervanaSystems/neon (Public): Intel® Nervana™ reference deep learning framework committed to best performance on all hardware
- flame/fmm-gen (Public): Generating Families of Practical Fast Matrix Multiplication Algorithms
- pytorch/pytorch (Public): Tensors and Dynamic neural networks in Python with strong GPU acceleration
- pytorch/FBGEMM (Public): FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/
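The FBGEMM entry above spells out GEMM as General Matrix-Matrix Multiplication. As a point of reference, a minimal pure-Python sketch of the textbook operation C = alpha * A * B + beta * C is shown below; this is illustrative only and is not FBGEMM's API, which provides highly optimized low-precision (e.g. int8/bf16) kernels for the same operation.

```python
# Naive GEMM sketch: C = alpha * (A @ B) + beta * C.
# Illustrative only; not FBGEMM code or its API.
def gemm(A, B, C, alpha=1, beta=0):
    m, k = len(A), len(A[0])
    n = len(B[0])
    for i in range(m):
        for j in range(n):
            # Inner product of row i of A with column j of B.
            acc = sum(A[i][p] * B[p][j] for p in range(k))
            C[i][j] = alpha * acc + beta * C[i][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[0, 0], [0, 0]]
print(gemm(A, B, C))  # [[19, 22], [43, 50]]
```

Production libraries block and vectorize this triple loop; the arithmetic is the same.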
392 contributions in the last year
Activity overview
Contribution activity
December 2022
Created 13 commits in 1 repository.

Created a pull request in pytorch/FBGEMM that received 15 comments:

- Add BF16 output support for inference TBE
  Summary: As title. Reviewed By: jiecaoyu. Differential Revision: D41835847
  +327 −4, 15 comments
Opened 12 other pull requests in 1 repository:

pytorch/FBGEMM (5 open, 7 closed)
- Change update_row_idx data type from int32_t to int64_t
- Back out "deprecate the inplace update op in fbgemm_gpu/fb"
- deprecate the inplace update op in fbgemm_gpu/fb
- Fix illegal memory access issue caused by int32_t representable ranges
- deprecate the inplace update op in fbgemm_gpu/fb
- Fix the leftover test case for output_dtype
- Add BF16 output support for inference TBE
- Enable UvmCacheStats collection for training
- Enable UvmCacheStats collection for training
- small fix to test CI failure on the trunk
- Follow up on BC issue for open sourcing TBE inplace update op
- Change from ubuntu latest (22.04) to ubuntu 20.04
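Two of the pull requests above concern int32_t row indices ("Change update_row_idx data type from int32_t to int64_t" and "Fix illegal memory access issue caused by int32_t representable ranges"). A hypothetical sketch of the underlying failure mode, not taken from FBGEMM's code: once a row index in a very large table exceeds 2**31 - 1, truncating it to a signed 32-bit integer wraps it negative, producing an out-of-bounds access.

```python
# Sketch of the int32 overflow failure mode behind the index-type PRs
# (illustrative; not FBGEMM code). A row index past 2**31 - 1 no longer
# fits in a signed 32-bit integer and wraps negative when truncated.

INT32_MAX = 2**31 - 1

def to_int32(x):
    """Simulate C-style truncation of x to a signed 32-bit integer."""
    return (x + 2**31) % 2**32 - 2**31

row_idx = 2**31 + 4             # a valid row in a table with > 2**31 rows
assert row_idx > INT32_MAX      # exceeds the int32 representable range
wrapped = to_int32(row_idx)
print(wrapped)                  # -2147483644: a negative, out-of-bounds index
```

Widening the index type to int64_t, as the first PR title describes, avoids the wraparound for any practically sized table.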