
SketchSampler: Sketch-Based 3D Reconstruction via View-Dependent Depth Sampling

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13661)


Abstract

Reconstructing a 3D shape from a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape. Existing works try to employ the global feature extracted from the sketch to directly predict the 3D coordinates, but they usually lose fine details and are thus not faithful to the input sketch. Through analyzing the 3D-to-2D projection process, we notice that the density map that characterizes the distribution of 2D point clouds (i.e., the probability of points projected at each location of the projection plane) can be used as a proxy to facilitate the reconstruction process. To this end, we first translate the sketch via an image translation network into a more informative 2D representation, which is then used to generate a density map. Next, a 3D point cloud is reconstructed via a two-stage probabilistic sampling process: first recovering the 2D points (i.e., the x and y coordinates) by sampling the density map, and then predicting the depth (i.e., the z coordinate) by sampling the depth values along the ray determined by each 2D point. Extensive experiments are conducted, and both quantitative and qualitative results show that our proposed approach significantly outperforms the baseline methods.
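As a rough illustration of the two-stage sampling described in the abstract, the following Python/PyTorch sketch (not the authors' implementation; the depth_net callable, the tensor shapes, and the point count are hypothetical assumptions) first draws 2D locations from a predicted density map and then lifts each location to 3D with a predicted depth:

    # A minimal sketch of the two-stage sampling; not the authors' code.
    # `depth_net`, the shapes, and `num_points` are illustrative assumptions.
    import torch

    def sample_point_cloud(density_map, depth_net, num_points=2048):
        # density_map: (H, W) tensor of per-pixel projection probabilities.
        # depth_net: assumed callable mapping (N, 2) normalized 2D points
        # to an (N,) tensor of depth (z) values.
        H, W = density_map.shape
        # Stage 1: draw pixel locations in proportion to the density map.
        probs = density_map.flatten()
        probs = probs / probs.sum()
        idx = torch.multinomial(probs, num_points, replacement=True)
        v = torch.div(idx, W, rounding_mode="floor")  # row index
        u = idx % W                                   # column index
        # Normalize pixel indices to [-1, 1] (cf. Note 1 below).
        x = 2.0 * u.float() / (W - 1) - 1.0
        y = 2.0 * v.float() / (H - 1) - 1.0
        xy = torch.stack([x, y], dim=-1)              # (N, 2) 2D points
        # Stage 2: predict a depth along the ray through each 2D point.
        z = depth_net(xy)                             # (N,) z coordinates
        return torch.cat([xy, z.unsqueeze(-1)], dim=-1)  # (N, 3) points

Sampling with replacement lets denser regions of the projection plane receive proportionally more points, which is exactly the property the density-map proxy is meant to capture.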


Notes

  1. \(x=\frac{2u}{\mathcal {W}-1}-1\), \(y=\frac{2v}{\mathcal {H}-1}-1\), where \(u=0,1,\dots ,\mathcal {W}-1\), \(v=0,1,\dots ,\mathcal {H}-1\).
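A quick numerical check of this normalization (a minimal sketch; the 256×256 grid size is a hypothetical example, since \(\mathcal {W}\) and \(\mathcal {H}\) are left generic in the note):

    # Endpoint behaviour of the normalization in Note 1.
    # The grid size is a hypothetical example, not taken from the paper.
    W, H = 256, 256
    xs = [2 * u / (W - 1) - 1 for u in range(W)]  # u=0 -> -1.0, u=W-1 -> +1.0
    ys = [2 * v / (H - 1) - 1 for v in range(H)]  # v=0 -> -1.0, v=H-1 -> +1.0
    assert xs[0] == -1.0 and xs[-1] == 1.0
    assert ys[0] == -1.0 and ys[-1] == 1.0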

Acknowledgement

This work is supported by the National Natural Science Foundation of China (No. 62002012 and No. 62132001) and Key Research and Development Program of Guangdong Province, China (No. 2019B010154003).

Author information

Corresponding author

Correspondence to Qian Yu.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 13,579 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Gao, C., Yu, Q., Sheng, L., Song, Y.Z., Xu, D. (2022). SketchSampler: Sketch-Based 3D Reconstruction via View-Dependent Depth Sampling. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13661. Springer, Cham. https://doi.org/10.1007/978-3-031-19769-7_27

  • DOI: https://doi.org/10.1007/978-3-031-19769-7_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19768-0

  • Online ISBN: 978-3-031-19769-7

  • eBook Packages: Computer Science, Computer Science (R0)
