I have recently been involved in discussions regarding the losses in induction motors, and it has been suggested that "theory and practice don't always match".
My understanding is this:
The losses in an induction motor operating from a constant supply voltage and frequency are:
- Iron loss
- Copper loss
- Frictional loss
- Windage loss
- Other minor losses
Of these, copper loss is proportional to the current squared, frictional and windage losses are speed dependent, and iron loss is dependent on the flux density in the iron.
This would suggest that only the copper loss is load dependent, decreasing as the load decreases.
Iron loss reduces with reducing voltage, and increases dramatically if the voltage is elevated to the point of core saturation.
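To illustrate that voltage dependence, here is a minimal Steinmetz-style sketch. The rated loss, the exponent, and the saturation knee are all assumed illustrative values, not measurements from any real machine:

```python
# Illustrative sketch of iron loss vs supply voltage at fixed frequency.
# At fixed frequency, flux density scales with voltage, so iron loss
# rises roughly with V^n (n ~ 1.6-2 depending on hysteresis/eddy split).
# All constants below are assumptions chosen for illustration only.

P_IRON_RATED = 300.0   # W at rated (1.0 per-unit) voltage, assumed
STEINMETZ_EXP = 2.0    # combined hysteresis + eddy exponent, assumed

def iron_loss(v_pu):
    """Iron loss (W) as a function of per-unit voltage at fixed frequency."""
    loss = P_IRON_RATED * v_pu ** STEINMETZ_EXP
    # Crude saturation penalty: above ~1.1 pu the core saturates and the
    # magnetizing current (and associated loss) rise much faster than V^2.
    if v_pu > 1.1:
        loss *= 1.0 + 5.0 * (v_pu - 1.1) ** 2
    return loss

for v in (0.8, 1.0, 1.1, 1.2):
    print(f"V = {v:.1f} pu: iron loss ~ {iron_loss(v):5.0f} W")
```

The sketch reproduces the qualitative behaviour described above: loss falls smoothly with reduced voltage, but climbs steeply once the assumed saturation knee is passed.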
The major losses in the motor are the copper and iron losses, which are of the same order of magnitude.
From the above, one would expect the total losses in the motor to fall as the load is reduced, reaching their lowest value at minimum load. This certainly accords with my experience.
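The classical picture above can be sketched numerically. All the loss figures below are assumed illustrative values, not data from a real motor, and the no-load magnetizing current is neglected so the copper loss goes to zero at zero load:

```python
# Sketch of the classical loss model for an induction motor running at
# constant supply voltage and frequency. Iron and friction/windage losses
# are treated as load independent; copper loss scales with current squared.
# All numbers are illustrative assumptions.

P_IRON = 300.0      # W, fixed by supply voltage (flux density), assumed
P_FW = 100.0        # W, friction + windage, ~constant near rated speed
P_CU_FULL = 400.0   # W, copper loss at full-load current, assumed

def total_losses(load_fraction):
    """Total losses (W) at a given fraction of full-load current.

    Simplification: magnetizing current is ignored, so copper loss
    vanishes at zero load instead of settling at a small residual value.
    """
    p_cu = P_CU_FULL * load_fraction ** 2
    return P_IRON + P_FW + p_cu

for frac in (1.0, 0.75, 0.5, 0.25, 0.0):
    print(f"load {frac:4.0%}: total losses {total_losses(frac):5.0f} W")
```

With these assumed figures the off-load losses (400 W) come out at half the full-load losses (800 W), which is exactly the monotonic decrease the classical theory predicts and the opposite of the reported measurement below.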
I recently had a case put to me of a motor whose off-load losses were three times its full-load losses. This was reported to have been verified by a certified laboratory, and it really undermines the classical equivalent-circuit theory above.
Can anybody shed any light on why or how this can be so, and under what circumstances it can be repeated?