Outline & Motivation
One reason TensorFlow fell behind PyTorch is its overly complex design; PyTorch is much simpler. But when I started using pytorch-lightning, it felt like another TensorFlow. So I am asking you to keep things simple. To understand a simple checkpoint-saving function, I had to trace the code from ModelCheckpoint to trainer.save_checkpoint, then to checkpoint_connector.save_checkpoint, then to trainer.strategy.save_checkpoint. Where does it end? How can anyone verify correctness under such a complex design? Please make it simple!
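For contrast, here is a minimal sketch of what checkpointing looks like in plain PyTorch. The model, optimizer, and file name are illustrative placeholders, not anything from Lightning's codebase; the point is that saving and restoring are each one transparent call with no intermediate layers:

```python
import torch
import torch.nn as nn

# Stand-ins for a real model and optimizer.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Saving: one call, no connector/strategy indirection.
torch.save(
    {
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
        "epoch": 0,
    },
    "checkpoint.pt",
)

# Restoring is equally direct.
state = torch.load("checkpoint.pt")
model.load_state_dict(state["model_state_dict"])
optimizer.load_state_dict(state["optimizer_state_dict"])
```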
Pitch
The Strategy design here is as complex as TensorFlow's. DDP is, at its core, just an all-reduce of gradients, yet inside Strategy (as in Keras) the call stacks go so deep that it is hard to tell where the actual all-reduce happens. Even after spending weeks, users may not figure out what the code is really doing, because the calls jump from module a to b, then to c, then back to a, then to b again, until they give up.
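To make the point concrete, below is a conceptual sketch of what DDP boils down to. This is not Lightning's or PyTorch's actual implementation (real DDP overlaps communication with backward and buckets gradients); it assumes torch.distributed.init_process_group() has already been called, and the function name is my own:

```python
import torch
import torch.distributed as dist

def allreduce_gradients(model: torch.nn.Module) -> None:
    """Conceptual core of DDP: after backward(), average each
    parameter's gradient across all processes with one all-reduce.
    Assumes the default process group is already initialized."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

# Typical use inside a training step (sketch):
#   loss.backward()
#   allreduce_gradients(model)
#   optimizer.step()
```

That is the whole idea in a dozen lines; a reader should be able to find this logic in the library without chasing calls across half a dozen modules.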
Additional context
I suggest implementing things as directly as they are: stop over-encapsulating, follow the design philosophy of PyTorch and Caffe, and stop making simple functions complicated.