When is dependency injection bad?

I'm assuming the author doesn't use unit tests either, given that he dismisses TDD as "poppycock". It's not surprising that DI doesn't fit that way of building programs, but I don't think that makes DI evil. It's just a tool for a different job. The conclusion is a lot more moderate than the title; the author actually uses dependency injection: While DI has benefits in some circumstances, such as shown in the Copy program, I do not believe in the idea that it automatically provides benefits in all circumstances.

This is why I consider that the application of DI in the wrong circumstances should be considered as being evil and not a universal panacea or silver bullet. In fact, in the wrong circumstances it could even be said that Dependency Injection breaks encapsulation.

Although I have found some places in my framework where I have made use of DI because of the obvious benefits it provides, there are some places where I could use DI but have chosen not to do so.

I think using the word evil is far too dramatic; evil should mean "do not ever use". Well, at least he didn't title it "Dependency Injection Considered Harmful".

BurningFrog on April 12: So all he's saying is that using DI wrong is wrong?

There is a good reason to force people to use dependency injection: at least you're sure that their code will use an interface to describe a dependency, which is a huge advantage compared to letting everybody write their code however they want. The issue with this, in my experience, is that if you don't understand the driving principles behind DI patterns (prefer to depend on an interface or set of behaviours; prefer to be given something that satisfies that dependency instead of instantiating it yourself, with the aim of not doing things you aren't responsible for), you can end up with what amounts to either rather messy global state, or systems that are still very highly coupled but have a good paint job. A sketch of the basic principle follows.
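As a minimal illustration of that principle (the class and interface names here are hypothetical, not taken from any of the articles discussed), the consumer is handed something that satisfies the dependency instead of constructing it itself:

    using System;

    public interface IMessageSink
    {
        void Send(string message);
    }

    public sealed class SmtpSink : IMessageSink
    {
        public void Send(string message) => Console.WriteLine($"SMTP: {message}");
    }

    // Tightly coupled: the notifier decides for itself how messages are sent.
    public sealed class CoupledNotifier
    {
        private readonly SmtpSink sink = new SmtpSink();
        public void Notify(string text) => sink.Send(text);
    }

    // Dependency injection: the caller supplies anything that satisfies IMessageSink,
    // so a test can pass a fake and production code can pass an SmtpSink.
    public sealed class Notifier
    {
        private readonly IMessageSink sink;
        public Notifier(IMessageSink sink) => this.sink = sink;
        public void Notify(string text) => sink.Send(text);
    }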

I can't help but see discussions about DI as suffering from sounding fancier than it is, and also from less-experienced devs thinking that they must be writing "correct" code because it has the general shape of the DI pattern they read about. If it were my choice, I'd ditch the terminology and best practices altogether in favour of actively and critically thinking about the dependencies, responsibilities, and assumptions of the code being written or read.

KyeRussell on April 12:

I take great exception to that arrogant, condescending attitude. Stuff like this makes me question whether this guy has some deeper issues he needs to work out before he can write blog posts about design patterns. The entire thing reeks of clickbait and narcissism. Really thoughtful, clearly articulated, backed with facts and diagrams, whining.

Brevity is the soul of wit. Here's a summary: dependency injection's primary benefit is to aid unit testing, which makes dependency injection all the more worthwhile; but since the author doesn't unit test, the application of DI is obviously lost on him.

So, big rant follows. It all boils down to this. Here's the catch, though: with dependencies, it is really, really hard to know whether dependency implementations change or not. What is more, in most languages, implementing dependency injection is so trivial that it is always worth it. Conversely, the work associated with changing implementations that aren't built with some form of loose coupling, that is, designing around DI, is in most cases non-trivial.

Nope nope. You never know if the implementations of those ideas will change!!!!! It is always possible that anything might need to change. That is how it is. This does not justify calcifying your code by adding extra unnecessary structure, because what that in fact does is make the program harder to change later while requiring you to do more work up front.

Also, as the author of the article notes, it requires one to keep more pieces of information clear in one's head in order to work with code of equivalent complexity, something that is almost always a big lose. In a good language, if a dependency implementation changes, you know this because your program does not compile.

Well, of course, because you are not a noob, you are linking against code that is versioned in the first place, so this should never even be an issue unless you are actively upgrading outside code and are expecting it. When your program does not compile, you want the compile error to be at the site that uses the dependency, because that tells you exactly where the thing you need to fix is.

Adding excess verbiage around it, and distancing the site that instantiates the dependency from the site that uses it, only causes more work.

It seems to me that you think designing for DI is much more complicated than it actually is.

In most modern languages it is trivial to implement. Especially if you have ad-hoc polymorphism available, it comes at virtually no price. Some languages make it more difficult than others, but in most modern languages it's really easy.

Your example of getters and setters is a bit of a red herring. I understand what you mean by it, but it doesn't apply in this case. This is because getters and setters are an abstraction that derives from encapsulation.

But it is also an antipattern. In many cases, getters and setters are just a type of needless complexity - a distraction, overengineering! On the other hand, DI solves a real, practical problem, and if your language is intelligent, it is really simple to implement.

PaulHoule on April 12: I dunno. I write a lot of scripty programs that make subjective decisions and that, when they work well enough, go into production. In particular I can do experiments that change out any module without having to touch the source code, and that is pretty important. I have some sympathy for the idea that DI is harmful to good software design, but this article isn't an argument for it. My specific issue is that DI, and a number of other things, including single-implementation interfaces and mocks in testing, are normally used as a means to an end: testable fragments of code.

Individually testable fragments of code, taken to their logical conclusion, convert every function into a class, possibly implementing an interface, and taking its dependencies (i.e. other such classes) via injection. You then end up with an atomized library of classes with names like ThingDoer and methods like doTheThing. All the methods are now testable in isolation, since you can mock all the dependencies, and there's no risk of any pesky static references reaching out and pulling in stuff you can't easily mock.
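To make the shape of that outcome concrete, here is a small hypothetical example (the ThingDoer/doTheThing names come from the comment above; everything else is invented for illustration):

    public interface IRecordFetcher { string Fetch(int id); }
    public interface IRecordFormatter { string Format(string raw); }

    // Every tiny step becomes its own injectable "doer"...
    public sealed class RecordFetcher : IRecordFetcher
    {
        public string Fetch(int id) => $"record-{id}";
    }

    public sealed class RecordFormatter : IRecordFormatter
    {
        public string Format(string raw) => raw.ToUpperInvariant();
    }

    // ...and the "business logic" is reduced to wiring the doers together.
    public sealed class ReportThingDoer
    {
        private readonly IRecordFetcher fetcher;
        private readonly IRecordFormatter formatter;

        public ReportThingDoer(IRecordFetcher fetcher, IRecordFormatter formatter)
        {
            this.fetcher = fetcher;
            this.formatter = formatter;
        }

        // Each method is unit-testable in isolation, but somebody still has to
        // compose all of these fragments back into a working program.
        public string DoTheThing(int id) => formatter.Format(fetcher.Fetch(id));
    }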

Splitting everything up so aggressively means that somebody now needs to put all the pieces back together, so some automated help (DI, an IoC container) is used. That's where the DI comes in. But extensibility needs to be designed in; pervasive, mandatory abstraction boundaries are unlikely to be good fits for ad-hoc future extension. Slightly chunkier tests - not quite integration tests, but unit tests at the library level, covering the semantics that library clients actually care about - go a long way to reduce this.

But once you go in this direction, the whole reason for the edifice's existence - individually unit-testable atoms of code - is called into question. This is also my problem with mainstream Java code style.

My preferred style is to write support libraries that are individually testable at a slightly higher level, or that are functional-style static methods which are generic and wholly testable with simple stubs, and to write the main business code such that it uses the libraries in a fashion that's as close to obviously correct as possible. Isolate any complicated logic into a testable functional static method, or a testable general (but not necessarily complete) library.

Then integration-test this higher-level business logic. A common problem I see with many junior Java devs is that they write effectively procedural code split into method-per-class classes, and they zip together business logic and more complex implementation logic alongside one another.

Rather than building abstractions that make their business logic simple and free of complex implementation, you end up with a procedural call tree that has a gestalt - the complex implementation - spread across and intermixed with business logic, and all of it tied together via indirected runtime composition, because testing. That's fairly abstract, so I'll make it concrete. Consider a spreadsheet report generator over data coming from entities in a database.

Using a spreadsheet library (e.g. Apache POI) directly is typically quite thorny because it needs to try to support all the features, so you end up with complex logic dealing with each master row, then other methods that have complex logic dealing with each detail row. Code that has detailed knowledge about the business domain is intermixed with code that has detailed knowledge about the spreadsheet library's model. Let's not even talk about tests. An alternative approach - and a refactoring I made - was to create a reporting-oriented, write-only facade for the spreadsheet manipulation.
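Roughly, such a facade might look like this (a sketch only; the names are hypothetical, and C# is used here for illustration rather than the Java spreadsheet library discussed above):

    using System.Collections.Generic;

    // A write-only, report-oriented facade: the business code speaks in terms of
    // reports, master rows and detail rows, never in terms of cells and styles.
    public interface IReportWriter
    {
        void StartReport(string title);
        void WriteMasterRow(params string[] values);
        void WriteDetailRow(params string[] values);
        void Save(string path);
    }

    // The business logic now reads almost like the report specification itself.
    public sealed class OrderReportGenerator
    {
        private readonly IReportWriter writer;

        public OrderReportGenerator(IReportWriter writer) => this.writer = writer;

        public void Generate(IEnumerable<(string Customer, string[] Lines)> orders, string path)
        {
            writer.StartReport("Orders");
            foreach (var (customer, lines) in orders)
            {
                writer.WriteMasterRow(customer);
                foreach (var line in lines)
                    writer.WriteDetailRow(line);
            }
            writer.Save(path);
        }
    }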

The business logic was then consolidated from multiple complex classes into a single simple class that had straightforward code using the spreadsheet writer.

I wish we could replace the posted article with your comment. I'm all for articles that challenge the way things are done and that cause us to reflect on why we are doing the things we are doing, to make sure we're not cargo-culting - but this article is pretty bad for more reasons than I care to list.

Your comment is great, though, and hopefully will find its way to the top of the thread. People often get caught up in "the one true way to do things" mentality. This is engineering, they think, so there must be a single, indisputable, scientifically provable, ISO standardizable correct way to write software.

Once I know the one true way, I can mechanically apply it and not have to think about it anymore. Someone who subscribes to that mentality might go about creating a lot of ThingDoers with doTheThing methods. If you've made your system less understandable and maintainable than a monolithic procedural class, you've missed the point.

Related functionality needs to remain grouped together for understandability. Abstraction seams need to be created in ways and in places where it makes sense, not mindlessly to every single method in a system.

This is more of an art than a science. To keep that wire-up code close to the feature, see if your container supports encapsulating registrations, like the module from the example. For instance, such a module can be registered in Autofac like this:
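A rough sketch of the idea, with purely illustrative module and class names:

    using Autofac;

    public interface IReportGenerator { void Generate(); }
    public sealed class ReportGenerator : IReportGenerator { public void Generate() { } }

    // All the wire-up for one feature lives together in a single Autofac module.
    public class ReportingModule : Module
    {
        protected override void Load(ContainerBuilder builder)
        {
            builder.RegisterType<ReportGenerator>().As<IReportGenerator>();
        }
    }

    // The whole feature is then registered with one call:
    //   var containerBuilder = new ContainerBuilder();
    //   containerBuilder.RegisterModule(new ReportingModule());
    //   var container = containerBuilder.Build();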

Autofac, for example, allows you to register a concrete class under every interface it implements, like this (the class name is assumed for illustration):

    builder.RegisterType<ReportGenerator>().AsImplementedInterfaces();

But this hides the names of the interfaces that this class exposes. Instead, I make this very explicit:
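For instance (again with an illustrative class and interface name), the explicit form names each interface in the registration:

    builder.RegisterType<ReportGenerator>()
           .As<IReportGenerator>();   // every exposed interface spelled out by name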

Even if your IDE tells you that it can infer the registered interface from the registration code, keep it there.

My personal advice is to keep the scope of a dependency as local as possible. So if only a single method of a class needs that dependency, pass the dependency directly into that method. A common example of this is when such a method needs to get the current date and time. In C#, you can easily do that by de-referencing DateTime.Now, but that hard-codes a dependency on the system clock. In most cases, I solve that by defining a delegate for it and injecting something that returns DateTime.Now. Consider constructor injection if the entire class needs that dependency, but avoid anything less visible than that; it hides the dependency too much and obscures the logic of when that dependency is needed.
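A minimal sketch of that delegate approach (the delegate and class names are illustrative, not taken from the article):

    using System;

    // The "what time is it?" question becomes an injectable dependency.
    public delegate DateTime GetCurrentTime();

    public sealed class InvoiceChecker
    {
        // Only this method needs the clock, so only this method receives it.
        public bool IsOverdue(DateTime dueDate, GetCurrentTime now) => now() > dueDate;
    }

    // Production call:  checker.IsOverdue(dueDate, () => DateTime.Now);
    // Test call:        checker.IsOverdue(dueDate, () => new DateTime(2020, 1, 1));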

As a general rule of thumb, a class with more than three dependencies should be frowned upon. Scoping a dependency is not only about constructors and methods, though.

I believe it is important to emphasize that a dependency is supposed to have limited usage. One way to make that clear is to follow the convention that an abstraction such as a delegate or interface should only be used by code that lives in the same folder or any of its sub-folders.

Another way is to use nested containers. Autofac allows you to create a nested scope by calling BeginLifetimeScope on the container interface. This returns a disposable ILifetimeScope that you can use to restrict the availability of a dependency for as long as that scope exists. A mature container is a very advanced and optimized piece of code that is far better than hand-rolled wiring at tracking references to objects and understanding when to call their Dispose methods.
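In Autofac that looks roughly like this (reusing the illustrative ReportingModule and IReportGenerator from the sketch above):

    using Autofac;

    var builder = new ContainerBuilder();
    builder.RegisterModule(new ReportingModule());
    var container = builder.Build();

    // Dependencies resolved inside this scope are only available while it exists,
    // and anything disposable that it created is disposed along with the scope.
    using (var scope = container.BeginLifetimeScope())
    {
        var generator = scope.Resolve<IReportGenerator>();
        generator.Generate();
    }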

Having covered the good, the bad and the ugly of dependency injection, what remains is to answer the original question of this post: is an IoC container or dependency injection framework a good thing or not? Well, I think that is a stupid question. Any tool has its merits; just use it responsibly.

In Robert C Martin's Copy example, the Copy module is hard-coded to call the readKeyboard module and the writePrinter module. While the readKeyboard and writePrinter modules are both reusable, in that they can be used by other modules which need to gain access to the keyboard and the printer, the Copy module is not reusable as it is tied to, or dependent upon, those other two modules.

It would not be easy to change the Copy module to read from a different input device or write to a different output device. One method would be to code in a dependency to each new device as it became available, and have some sort of runtime switch which told the Copy module which devices to use, but this would eventually make it bloated and fragile.

The proposed solution is to make the high level Copy module independent of its two low level reader and writer modules. This is done using dependency inversion or dependency injection where the reader and writer modules are instantiated outside the Copy module and then injected into it just before it is told to perform the copy operation.

In this way the Copy module does not know which devices it is dealing with, nor does it care. Provided that each of the injected objects has the relevant read and write methods, it will work.

This means that new devices can be created and used with the Copy module without requiring any code changes or even any recompilations of that module.

The Copy module shows the difference between having the module configure itself internally and having it configured externally; a comparison is sketched below. It is clear that the external configuration option using Dependency Injection offers more flexibility, as it allows the Copy module to be used with any number of devices without knowing the details of any of those devices.
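A rough reconstruction of that comparison (a C# sketch with illustrative names; the author's own framework code is PHP):

    public interface IReader { string Read(); }
    public interface IWriter { void Write(string data); }

    public sealed class Keyboard : IReader { public string Read() => "keyboard input"; }
    public sealed class Printer : IWriter { public void Write(string data) { /* send to printer */ } }

    // Internal configuration: Copy decides for itself which devices it uses,
    // so it can never be reused with a different input or output device.
    public sealed class CopyInternal
    {
        public void Copy()
        {
            var reader = new Keyboard();
            var writer = new Printer();
            writer.Write(reader.Read());
        }
    }

    // External configuration: the devices are instantiated elsewhere and injected,
    // so Copy works with any reader/writer pair and needs no changes for new devices.
    public sealed class CopyExternal
    {
        public void Copy(IReader input, IWriter output)
        {
            output.Write(input.Read());
        }
    }

    // Caller: new CopyExternal().Copy(new Keyboard(), new Printer());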

Notice also that the copy method in the copy object has arguments for both the source (input) and target (output) objects, which means that the objects are both injected and consumed in a single operation. There is no need to perform the inject and consume operations separately, so the idea that Robert C Martin's article promotes such a thing is completely wrong.

This is a prime example of where a relatively simple idea has been corrupted beyond recognition and made much more complicated. The idea that I must manage every one of my dependencies using some complicated Dependency Injection (DI) or Inversion of Control (IoC) mechanism does not fly in my universe, as the costs outweigh the benefits. It is not good enough to say "Here is a design pattern, now use it" without identifying under what circumstances it is designed to provide some sort of benefit.

If you don't have the problem which this design pattern is meant to solve then implementing the solution may be a complete waste of time. In fact, as well as being a total non-solution it may actually create a whole new series of problems, and that strikes me as being just plain stupid.

But what do I know, I'm just a heretic! The article When to use Dependency Injection? lists several situations in which DI may be of benefit. In my framework the situations described in points 1, 2, 4 and 5 simply do not exist, which leaves only point 3. In this case (where I have the same method appearing in multiple objects due to polymorphism) the object which contains that method can be one of several alternatives, and the choice of which alternative to use is made outside of the calling object, with the name of the dependent object injected into it.

This leads me to the following definition, which I believe is easier to understand and more precise, therefore less easy to misunderstand and get wrong: dependency injection is only worthwhile where a dependency can be supplied from several alternative implementations which share the same methods, with the choice being made at runtime outside of the consuming object.

So, the more polymorphism you have, the more opportunities you have for dependency injection. In my framework each one of my database tables has its own table class which inherits from the same abstract table class. My framework also has 40 reusable Page Controllers which communicate with their Model classes using methods defined in the abstract table class.
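The shape of that arrangement is sketched below with hypothetical names (the author's framework is PHP; C# is used here only for illustration): every concrete table class inherits the generic methods, so any controller can be handed any table class.

    using System.Collections.Generic;

    // Generic methods shared by every database table class.
    public abstract class AbstractTable
    {
        public abstract string TableName { get; }

        public virtual IList<IDictionary<string, object>> GetData(string where) =>
            new List<IDictionary<string, object>>();   // placeholder for real DB access

        public virtual int InsertRecord(IDictionary<string, object> fieldArray) => 1;
        public virtual int UpdateRecord(IDictionary<string, object> fieldArray) => 1;
    }

    public sealed class PersonTable : AbstractTable
    {
        public override string TableName => "person";
    }

    public sealed class OrderTable : AbstractTable
    {
        public override string TableName => "order";
    }

    // A reusable page controller: it only knows about AbstractTable, so the
    // concrete table class can be injected from outside (one form of DI).
    public sealed class ListController
    {
        public int Run(AbstractTable model) => model.GetData("").Count;
    }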

This means that any of my 40 Controllers can communicate with any of my Models. As well as identifying when a particular design pattern may be a good idea it is also necessary to identify under what circumstances its use may be totally inappropriate, or its benefits either minimal or non-existent.

In his article When does Dependency Injection become an anti-pattern? David Lundgren makes a similar point, and the Wikipedia article on dependency injection contains a similar caveat. In his article How to write testable code the author explains the difference between entity objects, which are stateful, and service objects, which are without state. He goes on to say that entities can be injected into services but should never be injected into other entities. So there you have it.

If you build into your application a mechanism for changing dependencies, but you never actually use it, then all that effort is wasted.

On top of that, by inserting code which you never use you may make your application more difficult to maintain. After searching through various articles on the interweb thingy I discovered that the benefits of DI are, in fact, extremely limited. In Jacob Proffitt's article, for example, I found the argument that DI does not provide any benefits outside of a testing framework, as its only function is to provide the ability to switch easily between a real object and a mock object.

But what is a mock object? Wikipedia describes mock objects as simulated objects that mimic the behaviour of real objects in controlled ways, most often as part of testing. I first came across the idea of using mock objects many years ago when I read articles saying that before implementing a design using Object Oriented Programming (OOP) it must first be designed using the principles of Object Oriented Design (OOD), in which the software objects are designed and built before the physical database is constructed.

This then requires the use of mock objects to access the non-existent database, and it is only after the software components have been finalised that you can then build the physical database and replace the mock objects with real objects. What Poppycock! As I am a heretic I do the complete opposite - I start with a database design which has been properly normalised, build a class for each database table, then build user transactions which perform operations on those table classes.

I have made this process extremely easy by building a Data Dictionary application which will generate both the table classes and the user transactions for me. Notice that I have completely bypassed the need for any OOD. I do not use mock objects when building my application, and I do not see the sense in using mock objects when testing. If I am going to deliver a real object to my customer then I want to test that real object and not a reasonable facsimile.

This is because it would be so easy to put code in the mock object that passes a particular test, but when the same conditions are encountered in the real object in the customer's application the results are something else entirely. You should be testing the code that you will be delivering to your customers, not the code which exists only in the test suite.

If you really need to use mock objects in your testing then using DI to switch objects at runtime is not the only solution. An alternative to DI is the service locator pattern, or possibly the service stub, both of which Martin Fowler discusses in his article on Inversion of Control containers and the Dependency Injection pattern. So if Martin Fowler says that it is possible to use a service locator instead of DI in unit testing, then who are you to argue otherwise? Before I describe how and where I do, or do not, use DI in my application it would be helpful if I identified the application's structure.

Firstly, let me state that I do not write public-facing web sites or component libraries; I write business-facing enterprise applications where the data, and the functions that can be performed on that data, take precedence over a sexy user interface.

In the enterprise it is the data which is the most important asset as it records its business activities and helps identify where it is making its profits or losses. The software which manipulates that data comes a close second as it is easier to replace than the data itself.

It was while using UNIFACE that I became aware of the 3-Tier Architecture, so when I rebuilt my framework in PHP I deliberately chose to implement this architecture as its potential for developing reusable components was much greater than with my previous frameworks.

In his article How to write testable code the author identifies three distinct categories of object: values, entities (which hold state) and services (which do not). This distinction between Entities and Services is also discussed in When to inject: the distinction between newables and injectables.

If you think about the objects being used in Robert C Martin's Copy program you should see that the devices are entities while the copy program is a service. He is clearly showing a situation in which he is injecting entities into a service. One may draw from this the following conclusions:

- There are no value objects, as in PHP all values are held as simple variables.
- In my own framework I never inject an entity into an entity; I only ever inject an entity into a service.

My services (the Controllers, Views and Data Access Objects) are not tied to any particular domain and can therefore be used with any domain. Note that the standard View component is only capable of generating HTML output, but my framework actually contains other components which can output the data in alternative formats such as PDF or CSV.

The following components are generated by the Data Dictionary in the framework when building an application: a table (Model) class file and a table structure file for each database table, plus the scripts for each user transaction. This structure means that all application logic is confined to the Model classes in the Business layer, while the Controllers, Views and Data Access objects are completely application-agnostic and can be shared by any number of different applications.

I then export the details of each database table to produce a table class file and a table structure file. I can then amend the table class file to include any extra validation rules or business logic. If the table's structure changes, all I have to do is regenerate the table structure file. When I build a user transaction, which does something to a database table, I go into the Data Dictionary, choose the table, then choose the Transaction Pattern, and when I press the button the relevant scripts are automatically created.

The transaction is automatically added to the menu database so that it can be run immediately. Note that the elapsed time between creating a database table and running the transactions which maintain that table is under 5 minutes, and all without writing a single line of code - no PHP, no SQL, no HTML.

With the RADICORE framework an end-user application is made up of a number of subsystems or modules, where each subsystem deals with its own database tables. The framework itself contains a number of subsystems of its own, including the Data Dictionary and the menu system mentioned earlier. Each of my user transactions (there are thousands of them) has its own small component script which identifies the database table, the screen structure file and the page controller to be used.

A typical transaction uses the 'update1' controller, although there are over 40 different controllers, one for each of my Transaction Patterns. The method names used within each Controller are common to every Model class, due to the fact that they are all inherited from the abstract table class, which is why any Controller can operate on any Model class.

What the component script does do is identify the screen structure information which is used by the View object to determine which pieces of data are to be displayed and how they are to be laid out.

The View object which generates PDF output has its own controllers and uses a report structure file. Those of you who are still awake at this point may notice some things in the above description which are likely to make the "paradigm police" go purple with rage. There may be other rules that I have violated, but I really don't care.

As far as I am concerned those are nothing but implementation details, and provided that what I have achieved is in the spirit of DI then the implementation details should be irrelevant and inconsequential.

The spirit of DI is that the consumer does not contain any hard-coded names of its dependents, so that it can be passed those names at run-time. DI nowadays suffers a lot from Cargo Cult Programming.

Dependency injection at its simplest and most fundamental level is simply this: a parent object provides all the dependencies required by the child object. Since more than two people have been confused by the terms parent and child, in the context of dependency injection:

- The parent is the object that instantiates and configures the child object it uses.
- The child is the component that is designed to be passively instantiated.

Dependency injection is a pattern for object composition. Why interfaces? Why frameworks? Frameworks provide the following benefits:

- Autowiring dependencies to components
- Configuring the components with settings of some sort
- Automating the boilerplate code so you don't have to see it written in multiple locations

They also have the following disadvantages:

- The parent object is a "container", and not anything in your code
- It makes testing more complicated if you can't provide the dependencies directly in your test code
- It can slow down initialization as it resolves all the dependencies using reflection and many other tricks
- Runtime debugging can be more difficult, particularly if the container injects a proxy between the interface and the actual component that implements the interface (the aspect-oriented programming built in to Spring comes to mind)

The container is a black box, and containers aren't always built with any concept of facilitating the debugging process. What about [random article on the Internet]? In short, think for yourself and try things out. Working with "old heads": learn as much as you can. (answer by Berin Loritsch)

CarlLeth, I've worked with a number of frameworks, Spring among them.

The only way to test components built like that is to use the container. Spring does have JUnit runners to configure the test environment, but it is more complicated than setting things up yourself. So yes, I just gave a practical example. In the worst case, I have to run the code to see how dependencies are initialized and passed in.

You mention this in the context of "testing", but it's actually much worse if you're just starting off looking at the source, never mind trying to get it to run (which may involve a ton of setup). Impeding my ability to tell what code does by just glancing at it is a Bad Thing. Interfaces are not contracts, they are simply APIs. Contracts imply semantics. BerinLoritsch: the main point of your own answer is that the DI principle is not the same thing as DI frameworks!

The fact that Spring can do awful, unforgivable things is a disadvantage of Spring, not of DI frameworks in general. A good DI framework helps you follow the DI principle without nasty tricks. CarlLeth: all DI frameworks are designed to remove or automate some things the programmer does not wish to spell out; they just vary in how they do it.

This is a trade-off you make even if your DI framework is "perfect".

JacquesB

I couldn't agree more! Also note that mocking things for the sake of it means that we're not actually testing the real implementations.

If A uses B in production, but has only been tested against MockB, our tests don't tell us whether it will work in production. When the pure (side-effect-free) components of a domain model are injecting and mocking each other, the result is a huge waste of everyone's time, a bloated and fragile codebase, and low confidence in the resulting system. Mock at the system's boundaries, not between arbitrary pieces of the same system.

CarlLeth: why do you think DI makes code "testable and maintainable", and code without it less so? If code has side-effects, we have to care. DI can pull side-effects out of functions and put them in parameters, making those functions more testable but the program more complex.

Sometimes that's unavoidable (e.g. DB access). If code has no side effects, DI is just useless complexity. CarlLeth: DI is one solution to the problem of making code testable in isolation if it has dependencies which forbid it. But it does not reduce overall complexity, nor does it make code more readable, which means it does not necessarily increase maintainability.

However, if all of those dependencies can be eliminated by better separation of concerns, this perfectly "nullifies" the benefits of DI, because it nullifies the need for DI. This is often a better solution for making code more testable and maintainable at the same time. Warbo: this was the original, and still probably the only valid, use of mocking.

Even at system boundaries, it is rarely needed. People really do waste much time creating and updating nearly worthless tests. CarlLeth: ok, now I see where the misunderstanding comes from. You are talking about dependency inversion.

In my experience, there are a number of downsides to dependency injection. (answer by Eric)

I would also add that the more you inject, the longer your startup times are going to become.

Most DI frameworks create all injectable singleton instances at startup time, regardless of where they are used. If you would like to test a class with a real implementation (not a mock), you can write functional tests - tests similar to unit tests, but not using mocks. Barbati: I'd question "most". Of the ones I've used, only Spring does this by default (it does it to try to flush injection errors out immediately), and you can opt out of it.

HK2, Guice and Dagger all create instances at injection time, not at application startup. What increases complexity is the use of unnecessary indirection and DI frameworks. Other than that, this answer hits the nail on the head. Dependency injection, outside of cases where it is truly needed, is also a warning sign that other superfluous code may be present in great abundance. Executives are often surprised to learn that developers add complexity for the sake of complexity.

Executives are often surprised to learn that developers add complexity for the sake of complexity. Show 4 more comments. I followed Mark Seemann's advice from "Dependency injection in. NET" - to surmise. Dennis 7, 4 4 gold badges 32 32 silver badges 64 64 bronze badges.

Note that he also gives different advice for OO and functional languages on his blog. That's a good point. Can you provide a reference to "Dependency Injection in .NET"?

If you are unit testing, then it is highly likely that your dependency is volatile.
