Learning Deep Learning: Theory and Practice of Neural Networks, Computer Vision, Natural Language Processing, and Transformers Using TensorFlow

Author:   Magnus Ekman
Publisher:   Pearson Education (US)
ISBN:   9780137470358
Pages:   752
Publication Date:   11 October 2021
Format:   Paperback
Availability:   In stock
We have confirmation that this item is in stock with the supplier. It will be ordered in for you and dispatched immediately.

Our Price:   $197.97


Overview

Deep learning (DL) is a key component of today's exciting advances in machine learning and artificial intelligence. Learning Deep Learning is a complete guide to DL. Illuminating both the core concepts and the hands-on programming techniques needed to succeed, this book is ideal for developers, data scientists, analysts, and others, including those with no prior machine learning or statistics experience.

After introducing the essential building blocks of deep neural networks, such as artificial neurons and fully connected, convolutional, and recurrent layers, Magnus Ekman shows how to use them to build advanced architectures, including the Transformer. He describes how these concepts are used to build modern networks for computer vision and natural language processing (NLP), including Mask R-CNN, GPT, and BERT. And he explains how to build a natural language translator and a system that generates natural language descriptions of images.

Throughout, Ekman provides concise, well-annotated code examples using TensorFlow with Keras. Corresponding PyTorch examples are provided online, so the book covers the two dominant Python libraries for DL used in industry and academia. He concludes with an introduction to neural architecture search (NAS), exploring important ethical issues and providing resources for further learning.
- Explore and master core concepts: perceptrons, gradient-based learning, sigmoid neurons, and backpropagation
- See how DL frameworks make it easier to develop more complicated and useful neural networks
- Discover how convolutional neural networks (CNNs) revolutionise image classification and analysis
- Apply recurrent neural networks (RNNs) and long short-term memory (LSTM) to text and other variable-length sequences
- Master NLP with sequence-to-sequence networks and the Transformer architecture
- Build applications for natural language translation and image captioning
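The book opens with the Rosenblatt perceptron and its learning rule. As a rough illustration of that starting point (a sketch in plain Python, not the book's own TensorFlow code; all names here are hypothetical), a perceptron can learn a linearly separable function such as logical AND:

```python
# Minimal Rosenblatt perceptron sketch (illustrative only).
# Inputs and targets use the classic -1/+1 encoding.

def sign(x):
    return 1 if x >= 0 else -1

def train_perceptron(samples, lr=0.1, epochs=20):
    # samples: list of ((x1, x2), target) pairs with targets in {-1, +1}
    w = [0.0, 0.0, 0.0]  # bias weight first, then one weight per input
    for _ in range(epochs):
        for (x1, x2), target in samples:
            x = [1.0, x1, x2]  # prepend a constant 1 for the bias term
            y = sign(sum(wi * xi for wi, xi in zip(w, x)))
            if y != target:  # update weights only on a misclassification
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
    return w

# Logical AND of two inputs, encoded with -1/+1
data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w = train_perceptron(data)
predictions = [sign(w[0] + w[1] * x1 + w[2] * x2) for (x1, x2), _ in data]
print(predictions)  # matches the targets: [-1, -1, -1, 1]
```

The later chapters replace this hand-rolled update rule with gradient-based learning and framework-managed layers, which is the progression the bullet list above describes.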

Full Product Details

Author:   Magnus Ekman
Publisher:   Pearson Education (US)
Imprint:   Addison Wesley
Dimensions:   Width: 18.80cm , Height: 3.20cm , Length: 23.00cm
Weight:   1.100kg
ISBN:   9780137470358
ISBN 10:   0137470355
Pages:   752
Publication Date:   11 October 2021
Audience:   Professional and scholarly ,  Professional & Vocational
Format:   Paperback
Publisher's Status:   Active
Availability:   In stock

Table of Contents

Chapter 1: The Rosenblatt Perceptron
Chapter 2: Gradient-Based Learning
Chapter 3: Sigmoid Neurons and Backpropagation
Chapter 4: Fully Connected Networks Applied to Multiclass Classification
Chapter 5: Toward DL: Frameworks and Network Tweaks
Chapter 6: Fully Connected Networks Applied to Regression
Chapter 7: Convolutional Neural Networks Applied to Image Classification
Chapter 8: Deeper CNNs and Pretrained Models
Chapter 9: Predicting Time Sequences with Recurrent Neural Networks
Chapter 10: Long Short-Term Memory
Chapter 11: Text Autocompletion with LSTM and Beam Search
Chapter 12: Neural Language Models and Word Embeddings
Chapter 13: Word Embeddings from word2vec and GloVe
Chapter 14: Sequence-to-Sequence Networks and Natural Language Translation
Chapter 15: Attention and the Transformer
Chapter 16: One-to-Many Network for Image Captioning
Chapter 17: Medley of Additional Topics
Chapter 18: Summary and Next Steps

Author Information

Magnus Ekman, Ph.D., is a director of architecture at NVIDIA Corporation. His doctorate is in computer engineering, and he is the inventor of multiple patents. He previously worked on processor design and R&D at Sun Microsystems and Samsung Research America, and has been involved in starting two companies, one of which (Skout) was later acquired by The Meet Group, Inc. In his current role at NVIDIA, he leads an engineering team working on CPU performance and power efficiency for systems-on-chip targeting the autonomous vehicle market. As the deep learning (DL) field exploded over the past few years, fueled by NVIDIA's GPU technology and CUDA, Dr. Ekman found himself in the middle of a company expanding beyond computer graphics into a DL powerhouse. As part of that journey, he challenged himself to stay up to date with the most recent developments in the field. He considers himself an educator, and in the process of writing Learning Deep Learning (LDL), he partnered with the NVIDIA Deep Learning Institute (DLI), which offers hands-on training in AI, accelerated computing, and accelerated data science.
