Abstract

Numerous neural networks, despite satisfactory performance, remain physically unjustified: they generate predictions that contradict logic and introduce inaccuracies into final applications. One method of justifying a typical black-box model already at the training stage is to extend its cost function with a term directly inspired by a physical formula. This publication explains the concept of physics-guided neural networks (PGNN), reviews the solutions already proposed in the field, and describes possibilities of implementing physics-based loss functions for spatial analysis. Our approach shows that the model predictions are not only optimal but also scientifically consistent with domain-specific equations. Furthermore, we present two applications of PGNNs and illustrate their advantages in theory by solving Poisson's and Burgers' partial differential equations. These equations describe various real-world processes and have numerous applications in applied mathematics. Finally, the use of scientific knowledge embedded in the tailored cost functions shows that our methods guarantee physics-consistent results as well as better generalizability of the model compared with classical artificial neural networks.
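The core idea described above — augmenting a data-fit cost with a physics-based penalty — can be sketched for Poisson's equation on a 1-D grid. This is a minimal illustration, not the authors' implementation: the function name, the finite-difference discretization, and the weighting parameter `lam` are all assumptions made for the example.

```python
import numpy as np

def physics_guided_loss(u_pred, u_true, f, h, lam=1.0):
    """Hypothetical PGNN-style loss for Poisson's equation u'' = f.

    Combines the usual data-fit term with a penalty on the discrete
    PDE residual, evaluated on a uniform 1-D grid of spacing h.
    """
    # standard data-fit term (mean squared error)
    data_loss = np.mean((u_pred - u_true) ** 2)
    # discrete Laplacian via central differences (interior points only)
    lap = (u_pred[2:] - 2.0 * u_pred[1:-1] + u_pred[:-2]) / h ** 2
    # physics residual: how far the prediction is from satisfying u'' = f
    phys_loss = np.mean((lap - f[1:-1]) ** 2)
    return data_loss + lam * phys_loss
```

During training, this scalar would replace the plain MSE objective, so that gradient descent steers the network toward predictions that both fit the data and satisfy the governing equation; `lam` trades off the two terms.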


Authors and Affiliations

Bartłomiej Borzyszkowski 1
Karol Damaszke 1
Jakub Romankiewicz 1
Marcin Świniarski 1
Marek Moszyński 1

  1. Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, ul. G. Narutowicza 11/12, 80-233 Gdańsk, Poland
