Accelerating MATLAB Performance : 1001 Tips to Speed up MATLAB Programs
Author(s): Altman, Yair M.
ISBN No.: 9781482211290
Pages: 785
Year: 2015
Format: Trade Cloth (Hard Cover)
Price: $151.51
Dispatch delay: Dispatched within 7 to 15 days
Status: Available

Table of Contents:

1. Introduction to Performance Tuning
1.1 Why Should We Bother?
1.2 When to Performance-Tune and When Not to Bother
1.3 The Iterative Performance Tuning Cycle
1.3.1 Pareto's Principle and the Law of Diminishing Returns
1.3.2 When to Stop Tuning
1.3.3 Periodic Performance Maintenance
1.4 What to Tune
1.5 Performance Tuning Pitfalls
1.5.1 When to Tune
1.5.2 Performance Goals
1.5.3 Profiling
1.5.4 Optimization
1.6 Performance Tuning Tradeoffs
1.7 Vertical versus Horizontal Scaling
1.8 Perceived versus Actual Performance
1.8.1 Presenting Continuous Feedback for Ongoing Tasks
1.8.2 Placing the User in Control
1.8.3 Enabling User Interaction during Background Processing
1.8.4 Streaming Data as it Becomes Available
1.8.5 Streamlining the Application
1.8.6 Reducing the Run-Time Variability
1.8.7 Performance and Real Time

2. Profiling MATLAB® Performance
2.1 The MATLAB Profiler
2.1.1 The Detailed Profiling Report
2.1.2 A Sample Profiling Session
2.1.3 Programmatic Access to Profiling Data
2.1.4 Function-Call History Timeline
2.1.5 CPU versus Wall-Clock Profiling
2.1.6 Profiling Techniques
2.1.6.1 Relative versus Absolute Run Times
2.1.6.2 Ensuring Profiling Consistency
2.1.6.3 Ensuring Compatibility with Real-World Conditions
2.1.6.4 Profiling GUI and I/O
2.1.6.5 Code Coverage
2.1.7 Profiling Limitations
2.1.8 Profiling and MATLAB's JIT
2.2 tic, toc and Relatives
2.2.1 The Built-In tic, toc Functions
2.2.2 Comparison between the Profiler and tic, toc
2.2.3 Related Tools
2.3 Timed Log Files and Printouts
2.4 Non-MATLAB Tools

3. Standard Performance-Tuning Techniques
3.1 Loop Optimization
3.1.1 Move Loop-Invariant Code Out of the Loop
3.1.1.1 A Simple Example
3.1.1.2 I/O and Memory-Related Invariants
3.1.1.3 Subexpression Hoisting
3.1.1.4 Loop Conditionals
3.1.1.5 Invoked Functions
3.1.2 Minimize Function Call Overheads
3.1.3 Employ Early Bail-Outs
3.1.4 Simplify Loop Contents
3.1.5 Unroll Simple Loops
3.1.6 Optimize Nested Loops
3.1.7 Switch the Order of Nested Loops
3.1.8 Minimize Dereferencing
3.1.9 Postpone I/O and Graphics Until the Loop Ends
3.1.10 Merge or Split Loops
3.1.11 Loop Over the Shorter Dimension
3.1.12 Run Loops Backwards
3.1.13 Partially Optimize a Loop
3.1.14 Use the Loop Index Rather than Counters
3.1.15 MATLAB's JIT
3.2 Data Caching
3.2.1 Read-Only Caches
3.2.2 Common Subexpression Elimination
3.2.3 Persistent Caches
3.2.3.1 In-Memory Persistence
3.2.3.2 Non-Memory Persistence
3.2.4 Writable Caches
3.2.4.1 Initializing Cache Data
3.2.4.2 Memoization
3.2.4.3 Multilayered (Offline) Cache
3.2.5 A Real-Life Example: Writable Cache
3.2.6 Optimizing Cache Fetch Time
3.3 Smart Checks Bypass
3.4 Exception Handling
3.5 Improving Externally Connected Systems
3.5.1 Database
3.5.1.1 Design
3.5.1.2 Storage
3.5.1.3 Indexing
3.5.1.4 Driver and Connection
3.5.1.5 SQL Queries
3.5.1.6 Data Updates
3.5.2 File System and Network
3.5.3 Computer Hardware
3.6 Processing Smaller Data Subsets
3.6.1 Reading from a Database
3.6.2 Reading from a Data File
3.6.3 Processing Data
3.7 Interrupting Long-Running Tasks
3.8 Latency versus Throughput
3.8.1 Lazy Evaluation
3.8.2 Prefetching
3.9 Data Analysis
3.9.1 Preprocessing the Data
3.9.2 Controlling the Target Accuracy
3.9.3 Reducing Problem Complexity
3.10 Other Techniques
3.10.1 Coding
3.10.1.1 Recursion
3.10.1.2 Using Known Computational Identities
3.10.1.3 Remove Unnecessary Computations ("Dead-Code" Elimination)
3.10.1.4 Optimize Conditional Constructs
3.10.1.5 Use Short-Circuit Conditionals (Smartly!)
3.10.1.6 Multiply Rather than Divide (or Not)
3.10.2 Data
3.10.2.1 Optimize the Processed Data
3.10.2.2 Select Appropriate Data Structures
3.10.2.3 Utilize I/O Data Compression
3.10.3 General
3.10.3.1 Reduce System Interferences
3.10.3.2 Self-Tuning
3.10.3.3 Jon Bentley's Rules

4. MATLAB®-Specific Techniques
4.1 Effects of Using Different Data Types
4.1.1 Numeric versus Nonnumeric Data Types
4.1.2 Nondouble and Multidimensional Arrays
4.1.3 Sparse Data
4.1.4 Modifying Data Type in Run Time
4.1.5 Concatenating Cell Arrays
4.1.6 Datasets, Tables, and Categorical Arrays
4.1.7 Additional Aspects
4.2 Characters and Strings
4.2.1 MATLAB's Character/Number Duality
4.2.2 Search and Replace
4.2.3 Converting Numbers to Strings (and Back)
4.2.4 String Comparison
4.2.5 Additional Aspects
4.2.5.1 Deblanking
4.2.5.2 Concatenating Strings
4.2.5.3 Converting Java Strings into MATLAB
4.2.5.4 Internationalization
4.3 Using Internal Helper Functions
4.3.1 A Sample Debugging Session
4.4 Date and Time Functions
4.5 Numeric Processing
4.5.1 Using inf and NaN
4.5.2 Matrix Operations
4.5.3 Real versus Complex Math
4.5.4 Gradient
4.5.5 Optimization
4.5.6 Fast Fourier Transform
4.5.7 Updating the Math Libraries
4.5.8 Random Numbers
4.6 Functional Programming
4.6.1 Invoking Functions
4.6.1.1 Scripts versus Functions
4.6.1.2 Function Types
4.6.1.3 Input and Output Parameters
4.6.1.4 Switchyard Functions Dispatch
4.6.2 onCleanup
4.6.3 Conditional Constructs
4.6.4 Smaller Functions and M-files
4.6.5 Effective Use of the MATLAB Path
4.6.6 Overloaded Built-In MATLAB Functions
4.7 Object-Oriented MATLAB
4.7.1 Object Creation
4.7.2 Accessing Properties
4.7.3 Invoking Methods
4.7.4 Using System Objects
4.8 MATLAB Start-Up
4.8.1 The MATLAB Startup Accelerator
4.8.2 Starting MATLAB in Batch Mode
4.8.3 Slow MATLAB Start-Up
4.8.4 Profiling MATLAB Start-Up
4.8.5 Java Start-Up
4.9 Additional Techniques
4.9.1 Reduce the Number of Workspace Variables
4.9.2 Loop Over the Smaller Data Set
4.9.3 Referencing Dynamic Struct Fields and Object Properties
4.9.4 Use Warning with a Specific Message ID
4.9.5 Prefer num2cell Rather than mat2cell
4.9.6 Avoid Using containers.Map
4.9.7 Use the Latest MATLAB Release and Patches
4.9.8 Use is* Functions Where Available
4.9.9 Specify the Item Type When Using ishghandle or exist
4.9.10 Use Problem-Specific Tools
4.9.11 Symbolic Arithmetic
4.9.12 Simulink
4.9.13 Mac OS
4.9.14 Additional Ideas

5. Implicit Parallelization (Vectorization and Indexing)
5.1 Introduction to MATLAB Vectorization
5.1.1 So What Exactly is MATLAB Vectorization?
5.1.2 Indexing Techniques
5.1.3 Logical Indexing
5.2 Built-In Vectorization Functions
5.2.1 Functions for Common Indexing Usage Patterns
5.2.2 Functions That Create Arrays
5.2.3 Functions That Accept Vectorized Data
5.2.3.1 reshape
5.2.4 Functions That Apply Another Function in a Vectorized Manner
5.2.4.1 arrayfun, cellfun, spfun, and structfun
5.2.4.2 bsxfun
5.2.5 Set-Based Functions
5.3 Simple Vectorization Examples
5.3.1 Trivial Transformations
5.3.2 Partial Data Summation
5.3.3 Thresholding
5.3.4 Cumulative Sum
5.3.5 Data Binning
5.3.6 Using meshgrid and bsxfun
5.3.7 A meshgrid Variant
5.3.8 Euclidean Distances
5.3.9 Range Search
5.3.10 Matrix Computations
5.4 Repetitive Data
5.4.1 A Simple Example
5.4.2 Using repmat Replacements
5.4.3 Repetitions of Internal Elements
5.5 Multidimensional Data
5.6 Real-Life Example: Synthetic Aperture Radar Matched Filter
5.6.1 Naïve Approach
5.6.2 Using Vectorization
5.7 Effective Use of MATLAB Vectorization
5.7.1 Vectorization Is Not Always Faster
5.7.2 Applying Smart Indexing
5.7.3 Breaking a Problem into Simpler Vectorizable Subproblems
5.7.4 Using Vectorization as Replacement for Iterative Data Updates
5.7.5 Minimizing Temporary Data Allocations
5.7.6 Preprocessing Inputs, Rather Than Postprocessing the Output
5.7.7 Interdependent Loop Iterations
5.7.8 Reducing Loop Complexity
5.7.9 Reducing Processing Complexity
5.7.10 Nested Loops
5.7.11 Analyzing Loop Pattern to Extract a Vectorization Rule
5.7.12 Vectorizing Structure Elements
5.7.13 Limitations of Internal Parallelization
5.7.14 Using MATLAB's Character/Number Duality
5.7.15 Acklam's Vectorization Guide and Toolbox
5.7.16 Using Linear Algebra to Avoid Looping Over Matrix Indexes
5.7.17 Intersection of Curves: Reader Exercise

6. Explicit Parallelization Using MathWorks Toolboxes
6.1 The Parallel Computing Toolbox: CPUs
6.1.1 Using parfor-Loops
6.1.2 Using spmd
6.1.3 Distributed and Codistributed Arrays
6.1.4 Interactive Parallel Development with pmode
6.1.5 Profiling Parallel Blocks
6.1.6 Running Example: Using parfor Loops
6.1.7 Running Example: Using spmd
6.2 The Parallel Computing Toolbox: GPUs
6.2.1 Introduction to General-Purpose GPU Computing
6.2.2 Parallel Computing with GPU Arrays
6.2.3 Running Example: Using GPU Arrays
6.2.4 Running Example: Using Mul.

