The introductory documentation, which I am reading (TOC here), uses the term "batch" (for instance, here) without having defined it.
Let's say you want to do digit recognition (MNIST) and you have defined the architecture of your network (a CNN). Now, you could start feeding the images from the training data one by one into the network, get the prediction (up to this step it's called doing inference), compute the loss, compute the gradient, update the parameters of your network (i.e. the weights and biases), and then proceed with the next image. This way of training the model is sometimes called online learning.
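As a rough illustration (not code from the TensorFlow docs), here is a minimal sketch of that online-learning loop in TensorFlow 2.x / Keras: one image is fed at a time, a forward pass is run, the loss and gradients are computed, and the weights are updated after every single example. The tiny Dense model, optimizer, and learning rate are illustrative choices.

```python
# Minimal sketch of online training (batch size 1) on MNIST, assuming TensorFlow 2.x.
# The model, optimizer, and learning rate are illustrative, not from the docs.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0               # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),         # 28x28 image -> 784-vector
    tf.keras.layers.Dense(10),                             # logits for the 10 digits
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for i in range(1000):                                      # a small slice, for brevity
    x = x_train[i:i + 1]                                   # shape (1, 28, 28): one image
    y = y_train[i:i + 1]                                   # shape (1,): its label
    with tf.GradientTape() as tape:
        logits = model(x, training=True)                   # forward pass (inference)
        loss = loss_fn(y, logits)                          # compute the loss
    grads = tape.gradient(loss, model.trainable_variables)            # compute gradients
    optimizer.apply_gradients(zip(grads, model.trainable_variables))  # update weights
```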
But you want the training to be faster, the gradients to be less noisy, and you also want to take advantage of GPUs, which are efficient at array operations (nD arrays, to be specific). So what you do instead is feed in, say, 100 images at a time (the choice of this size is up to you, i.e. it's a hyperparameter, and also depends on your problem). For instance, take a look at the picture below (author: Martin Gorner).
Here, since you're feeding in 100 images (each 28x28) at a time, instead of 1 as in the online-training case, the batch size is 100. Often this is called the mini-batch size, or simply the mini-batch. Also see the picture below (author: Martin Gorner).
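As a sketch of what those pictures show (assuming the single softmax-layer MNIST model from Martin Gorner's slides; the random tensor below just stands in for 100 real images), the point is purely the shapes:

```python
# Sketch of the picture's shapes: 100 flattened 28x28 images times one weight matrix.
import tensorflow as tf

batch = tf.random.uniform((100, 28, 28))          # a mini-batch of 100 "images"
X = tf.reshape(batch, (100, 784))                 # flatten each image: (100, 784)
W = tf.Variable(tf.zeros((784, 10)))              # weights: (784, 10)
b = tf.Variable(tf.zeros((10,)))                  # biases:  (10,)
Y = tf.nn.softmax(tf.matmul(X, W) + b)            # predictions: (100, 10)
print(Y.shape)                                    # one row of 10 class probabilities per image
```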
Now the matrix multiplication works out just fine, you take advantage of highly optimized array operations, and you get a faster training time.
If you look at the picture above, it doesn't matter much whether you feed in 100, 256, 2048, or 10000 images (the batch size), as long as the batch fits in the memory of your (GPU) hardware. You'll simply get that many predictions.
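For instance, the same single layer accepts any batch size, and the output simply has that many rows (a hypothetical shape check; the memory of your hardware is the only practical limit):

```python
# The same matmul works for any batch size; only the first dimension changes.
import tensorflow as tf

W = tf.zeros((784, 10))
b = tf.zeros((10,))
for batch_size in (100, 256, 2048, 10000):
    X = tf.random.uniform((batch_size, 784))
    Y = tf.nn.softmax(tf.matmul(X, W) + b)
    print(batch_size, Y.shape)                    # (100, 10), (256, 10), ...: that many predictions
```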
But please keep in mind that the batch size influences the training time, the error you achieve, the gradient noise, etc. There is no general rule of thumb as to which batch size works out best; just try a few sizes and pick the one that works best for you (see the sketch below). But try not to use very large batch sizes, since they tend to overfit the data. People commonly use mini-batch sizes of 32, 64, 128, 256, 512, 1024, or 2048.
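A hedged sketch of "just try a few sizes": train the same small Keras model with a few of the common batch sizes and compare validation accuracy and wall-clock time. The model, the one-epoch budget, and the particular sizes are arbitrary choices for illustration.

```python
# Compare a few batch sizes on MNIST by validation accuracy and training time.
import time
import tensorflow as tf

(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0   # scale pixels to [0, 1]

for batch_size in (32, 128, 512):                 # a subset of the common choices
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    start = time.time()
    model.fit(x_train, y_train, batch_size=batch_size, epochs=1, verbose=0)
    _, acc = model.evaluate(x_val, y_val, verbose=0)
    print(f"batch_size={batch_size}: val_acc={acc:.4f}, time={time.time() - start:.1f}s")
```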
Bonus: to get a good grasp of how far you can push the batch size, give this paper a read: One weird trick for parallelizing convolutional neural networks.