Modularity is arguably the most successful software engineering practice ever. Unlike most other practices, it is hardly ever abused.
One day I was asked when it is a bad idea to modularize software. Of course, as with every good thing, there are pathological or contrived circumstances in which it turns out to be a bad idea, and software modularity is no exception.
To present and understand complicated algorithms, they need to be modularized. Then, when wishing to optimize such an algorithm, one typically confines oneself to local optimizations rather than global ones.
When global optimizations are needed, the algorithm developer has to forsake modularity, and the resulting algorithm becomes large and difficult to comprehend.
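A minimal sketch of this trade-off, using a made-up example: two modular passes over an array are easy to read in isolation, while the globally optimized version fuses them into one pass, halving memory traffic at the cost of merging the modules.

```c
#include <stddef.h>

/* Modular version: two self-contained, easy-to-read passes. */
static void scale(double *a, size_t n, double k) {
    for (size_t i = 0; i < n; i++) a[i] *= k;
}

static void offset(double *a, size_t n, double b) {
    for (size_t i = 0; i < n; i++) a[i] += b;
}

/* Global optimization: the two steps fused into a single pass.
 * The array is traversed once instead of twice, but the two
 * formerly independent modules no longer exist as such. */
static void scale_and_offset(double *a, size_t n, double k, double b) {
    for (size_t i = 0; i < n; i++) a[i] = a[i] * k + b;
}
```

In a toy case like this the fused function is still readable; in a real algorithm with many interacting passes, the fused version is where the "very big and difficult to comprehend" code comes from.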
Source code level
Due to the way the human brain works, modularity is always good at the source code level. However, it needs language support, such as macros and inline functions, to allow compilation into efficient machine code.
On the other hand, one can expect the software development environment to provide source code pre-processing tools that work around any such language deficiencies. Nowadays this is not a big deal, unless one works for a severely dysfunctional software development operation.
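In C, for example, both mechanisms mentioned above are available. The sketch below (the function and macro names are illustrative, not from any particular codebase) shows a small accessor that stays a named, readable module at the source level, yet costs nothing in the generated machine code once the compiler expands it in place:

```c
#include <stdint.h>

/* A modular accessor kept in a header: at the source level it is a
 * named, self-documenting unit, but with "static inline" the compiler
 * can expand it at each call site, so no call overhead survives into
 * the machine code. */
static inline uint32_t clamp_u32(uint32_t v, uint32_t lo, uint32_t hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

/* The pre-processor alternative for toolchains without inlining
 * support. Note the classic macro hazard: each argument may be
 * evaluated more than once. */
#define CLAMP_U32(v, lo, hi) \
    ((v) < (lo) ? (lo) : ((v) > (hi) ? (hi) : (v)))
```

Either form lets the programmer keep the logic in one place without paying for a function call in a hot loop; the inline function is usually preferable because its arguments are evaluated exactly once.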
Machine code level
At the machine code level, software modularity means the use of DLLs, inter-module interfaces, plug-ins, etc.
This kind of modularity can be bad if the overhead of a module interface directly affects a system bottleneck. A system bottleneck could be CPU time, memory consumption, I/O, database accesses, network latency/throughput, etc.
A good system design implements machine code level modularity where the overhead is not critical to performance, and optimizes the interfaces away where system bottlenecks occur.
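One common way to "optimize the interface away" without abandoning modularity entirely is to batch work across the boundary. A hypothetical plug-in interface (all names here are invented for illustration) can offer both a chatty per-item call and a batched call, so the fixed cost of crossing the module boundary is paid once per buffer instead of once per item:

```c
#include <stddef.h>

/* A hypothetical plug-in interface, as might be exported by a DLL.
 * Each call through these function pointers crosses a module
 * boundary, which carries a fixed per-call cost. */
typedef struct {
    /* Chatty interface: one boundary crossing per item. */
    int (*process_one)(int item);
    /* Batched interface: one crossing per buffer, amortizing
     * the boundary cost over n items. */
    void (*process_many)(const int *items, int *out, size_t n);
} plugin_api;

/* A trivial demo implementation of the interface. */
static int double_one(int item) { return item * 2; }

static void double_many(const int *items, int *out, size_t n) {
    for (size_t i = 0; i < n; i++) out[i] = items[i] * 2;
}

static const plugin_api demo_plugin = { double_one, double_many };
```

When the interface sits on a bottleneck, callers use `process_many`; elsewhere the simpler `process_one` keeps the code readable. The modular structure survives, but the per-item overhead does not.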