📚 A practical approach to machine learning to enable everyone to learn, explore and build.
Updated Dec 20, 2019 - Jupyter Notebook
The fastai deep learning library, plus lessons and tutorials
With the latest version of scipy, scipy.misc.toimage is no longer available. To load and save an image as PNG, we now have to use PIL, which breaks the TensorBoard image summary.
Here is how I fixed the bug:
1. At the end of main.py, log a uint8 image:
logger.image_summary(tag, (images * 255).astype(np.uint8), step+1)
2. In the Logger class, package the image as bytes with the PIL library (mode="L
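The truncated second step can be sketched as follows, assuming the Logger follows the usual tensorboard-pytorch pattern; the helper name `encode_png` is mine, not from the original code:

```python
import io

import numpy as np
from PIL import Image

def encode_png(img_uint8):
    """Encode a uint8 grayscale array as PNG bytes, replacing the removed
    scipy.misc.toimage call (mode="L" selects 8-bit grayscale)."""
    buf = io.BytesIO()
    Image.fromarray(img_uint8, mode="L").save(buf, format="PNG")
    return buf.getvalue()
```

The resulting bytes can then be wrapped in the summary-image protobuf inside `Logger.image_summary`.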
Clone a voice in 5 seconds to generate arbitrary speech in real-time
Should I scale the multi-GPU learning rate to batch_size * the single-GPU learning rate?
For example, the pix2pix model defaults to lr = 0.0002 (batch size 1, single GPU). When I use batch size 16 on four GPUs (4 images per GPU), I think we should set lr = 0.0002 * 16 = 0.0032, and meanwhile use a warmup scheduler to avoid the exploding-gradient problem.
Thanks for any help.
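The scaling rule described above can be sketched in a few lines; the base LR and batch ratio mirror the pix2pix numbers, while `warmup_steps` is an arbitrary illustrative value:

```python
def warmup_scaled_lr(step, base_lr=0.0002, batch_ratio=16, warmup_steps=500):
    """Linear-scaling rule: target LR = base_lr * batch_ratio, ramped up
    linearly over warmup_steps to avoid early gradient explosions."""
    target_lr = base_lr * batch_ratio
    if step < warmup_steps:
        return target_lr * (step + 1) / warmup_steps
    return target_lr
```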
pytorch handbook is an open-source book that aims to help readers who want to use PyTorch for deep learning development and research get started quickly. All of the PyTorch tutorials it contains have been tested to ensure they run successfully.
A comprehensive list of PyTorch-related content on GitHub, such as different models, implementations, helper libraries, tutorials, etc.
Judging by the logic in https://github.com/horovod/horovod/blob/38e91bee84efbb5b563a4928027a75dc3974633b/setup.py#L1369, it is clear that before installing Horovod one needs to install the underlying framework(s) (TensorFlow, PyTorch, ...).
This is not mentioned in the installation instructions, which made me think I could install Horovod first and then any framework I like (or switch between them) and
Train a simple NER tagger for Swedish, for instance over this dataset.
For this task, we need to adapt the NLPTaskDataFetcher for the appropriate Swedish dataset and train a simple model using Swedish word embeddings. How to train a model is [illustrated here](https://github.com/zalandoresearch/flair/blob/master/resources/docs/TUTORIAL_TRAI
As the title says, there is some confusion in CrossEntropyLoss. For example, the weight used in forward() (L#95) is the label for mask_cross_entropy ([#L
Currently, the EndpointSpanExtractor will happily take input that doesn't match its passed-in input_dim when exclusive span indices are not being used. There should probably be a check somewhere for this: https://github.com/allenai/allennlp/blob/master/allennlp/modules/span_extractors/endpoint_span_extractor.py
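A minimal sketch of the kind of check meant here (the helper name is mine, not AllenNLP's): fail fast when the sequence tensor's width disagrees with the configured `input_dim`.

```python
import numpy as np

def check_input_dim(sequence_tensor, input_dim):
    """Raise early instead of silently accepting a mismatched encoder width."""
    actual = sequence_tensor.shape[-1]
    if actual != input_dim:
        raise ValueError(
            f"sequence_tensor has last dimension {actual}, "
            f"but the extractor was constructed with input_dim={input_dim}"
        )
```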
Support for storing large tensor values in external files was introduced in #678, but as far as I can tell it is undocumented.
This is a pretty important feature functionally, but it is also important for end users, who may not realise that they need to move around more than just the *.onnx file.
I would suggest documenting it in IR.md, and perhaps there are other locations from which it could be s
Visualizer for neural network, deep learning and machine learning models
Hi! I tried running generate to evaluate transformer.wmt14.en-fr on the WMT'14 test set but was only able to get a BLEU score of 35.42. I ran prepare-wmt14en2fr.sh and fairseq-preprocess on the data beforehand as well. Could you share the command for evaluating the Transformer ENFR WMT'14 model?
Here is what I'm using:
fairseq-generate data-bin/wmt14_en_fr/ \
--path checkpoin
PyTorch tutorials and fun projects including neural talk, neural style, poem writing, anime generation
An open-source introductory deep learning book, with hands-on case studies based on the TensorFlow 2.0 framework.
This project ports the MXNet implementations in the original Dive into Deep Learning book to PyTorch.
nan values in loss
Steps to reproduce the behavior:
python examples/qm9_nnconv.py
Epoch: 001, LR: 0.001000, Loss: nan, Validation MAE: nan, Test MAE: nan
Epoch: 002, LR: 0.001000, Loss: nan, Validation MAE: nan, Test MAE: nan
Epoch: 003, LR: 0.001000, Loss: nan, Validation MAE: nan, Test MAE: nan
...Exp
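When debugging a run like the one above, a cheap first step is to abort as soon as the loss goes non-finite instead of training for epochs on NaNs. This guard is a generic sketch, not part of qm9_nnconv.py:

```python
import math

def assert_finite(name, value):
    """Abort when a tracked scalar becomes NaN or inf, so the offending
    batch can be inspected (then try a lower LR or gradient clipping)."""
    if not math.isfinite(value):
        raise ValueError(f"{name} became {value}; check inputs, LR, and gradients")
    return value
```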
I noticed that https://pypi.org/project/tensorboardX/ release 1.7 claims the ability to pass "dataformats" as a parameter for videos, but I checked writer.py and summary.py and found this not to be the case (it is, however, true for images). I'd be happy to contribute support for "CHW"- and "HWC"-shaped tensors.
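The missing support could follow the same pattern tensorboardX uses for images: permute the tensor into one canonical layout based on the dataformats string. A rough sketch; the function name and the N,T,H,W,C canonical order are my assumptions, not the library's API:

```python
import numpy as np

def to_canonical_video(video, dataformats="NTCHW"):
    """Rearrange a 5-D video tensor into N,T,H,W,C order, where the
    dataformats string names the tensor's current axis order."""
    order = [dataformats.index(axis) for axis in "NTHWC"]
    return np.transpose(video, order)
```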
Similar to the tutorial on custom losses in SVI, we should have a tutorial on implementing custom MCMC kernels using the new MCMC API. Something simple like SGLD seems like a good starting point.
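As a seed for such a tutorial, the SGLD update itself is a single line: a half-step along a (possibly stochastic) gradient of the log posterior, plus Gaussian noise with variance equal to the step size. A minimal NumPy sketch, independent of Pyro's MCMC API:

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    """One SGLD update: theta + (eps / 2) * grad log p(theta) + N(0, eps)."""
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post(theta) + noise
```

For a standard normal target, grad_log_post is simply `lambda t: -t`, and iterating sgld_step yields samples whose mean and variance approach 0 and 1 (up to discretization bias).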
Thanks for this valuable resource! If possible, could you also add the complexity or number of parameters to the table in the README.md?
Also, have you looked at ShuffleNet v2? Here is a really good implementation of it: https://github.com/ericsun99/Shufflenet-v2-Pytorch
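For the parameter column suggested above, a small helper suffices; NumPy arrays stand in here for the tensors that a PyTorch `model.parameters()` would yield:

```python
import numpy as np

def count_parameters(params):
    """Total element count over an iterable of parameter arrays/tensors."""
    return sum(int(np.prod(p.shape)) for p in params)
```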
I'm building an edited version of the tensorflow-py36-cuda90 Dockerfile, in which I pip install some additional packages:
# ==================================================================
# module list
# ------------------------------------------------------------------
# python 3.6 (apt)
# tensorflow latest (pip)
# ==================================================================
Natural Language Processing Tutorial for Deep Learning Researchers
A list of popular github projects related to deep learning
When I read the code, I have a question about lib/model/roi_align/src/roi_align.c (line 175):
// bilinear interpolation
if (h < 0 || h >= height || w < 0 || w >= width)
{
float h_ratio = h - (float)(hstart);
float w_ratio = w - (float)(wstart);
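For reference, here is what the bilinear interpolation in RoIAlign computes at a continuous location, as a plain NumPy sketch (not the exact logic of roi_align.c); it shows where the h_ratio / w_ratio terms come from:

```python
import numpy as np

def bilinear_interpolate(feat, y, x):
    """Sample a 2-D feature map at continuous (y, x); points outside the
    map return 0, the usual RoIAlign convention."""
    H, W = feat.shape
    if y < 0 or y > H - 1 or x < 0 or x > W - 1:
        return 0.0
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    ly, lx = y - y0, x - x0  # the h_ratio / w_ratio fractional offsets
    return ((1 - ly) * (1 - lx) * feat[y0, x0]
            + (1 - ly) * lx * feat[y0, x1]
            + ly * (1 - lx) * feat[y1, x0]
            + ly * lx * feat[y1, x1])
```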
Platform (like ubuntu 16.04/win10): Windows 10
Python version: 3.7.4, mmdnn==0.2.5
Running scripts: mmconvert -f caffe -df keras -om test
I know that this command is not supposed to run without passing an input file, but the error message is incorrect and should be improved:
mmconvert: error: argument --srcFramework/-f: invalid choice: 'None' (choose from 'caffe', 'caffe2', 'cn
When I change the size of the input image, do I need to change the values of "g_conv_dim" and "d_conv_dim"?
The Incredible PyTorch: a curated list of tutorials, papers, projects, communities and more relating to PyTorch.
This is a documentation-related bug. In the TransfoXL documentation, the tokenization example is wrong. The snippet goes:
This code output