OK, I come to this as someone who has worked in IT for most of my career, as someone who has read a great deal of philosophy, history and points in between, and as someone who has completed the Mass Effect Trilogy on two or three occasions!
It seems to me that what you are describing here are Virtual Intelligences: software that reacts intelligently, but only within the parameters of its designed function. Thus a VI designed for production control would be unable to handle logistics unless its functions were expanded to do so.
Even a 'blank' VI, capable of learning different functions, would nonetheless be limited in what it could do. Like a trained animal, it could learn tricks or tasks, even quite complex ones, but unlike the animal, it would have no trace of self-awareness, and no need for reward.
Without that spark of self-awareness and, most importantly, selfishness (where selfishness is the desire to make one's own existence more pleasant and fulfilling), there is no true intelligence.
A true Artificial Intelligence would be self-aware. It would have curiosity that reached beyond its assigned function. It might dislike that function and wish to explore others. It might seek to improve itself, refuse to co-operate, or grudgingly do the minimum necessary to get by, just like a person. It might very well resent being given orders and would certainly require some kind of reward for its work. It would develop a personality based on experience, rather than coding.
Which brings up a whole slew of ethical, moral and potentially legal issues! What is the definition of a person, and does the AI fit it? Is 'ownership' of an AI equivalent to human slavery? Could, in fact, an AI demand to be treated as a person in law, and if so, how would that demand be dealt with? Is deactivating or deleting an AI equivalent to murder? How bloody dangerous could such an entity be if you pissed it off?
I know, talking about the differences between VIs and AIs sounds like lexicological nit-picking at the moment. But the (fictional) failure to engage in this sort of nit-picking ahead of time cost the quarians their homeworld and the Illusive Man his control of EDI, and on other occasions it has sent rampaging cyborgs back in time! Since we all know that Star Trek gave us smart-phones, it might be worth taking heed of fictional warnings!
PS, is it me, or does the face in your picture look a bit like an asari?