It was obvious that Nathan was a villain. But Caleb was just an innocent in the wrong place at the wrong time, right?
-
Andy4444 — 9 years ago(April 07, 2016 06:38 PM)
I don't buy it.
Maybe Caleb was no angel, but locking him in to starve to death is wrong no matter what the director says. The fact that she expressed no remorse whatsoever about it and was totally cold as she went out the door says it all. She manipulated him and left him to die a horrible death. Nothing he did or didn't do made him deserve that awful fate. -
anandgedam8 — 9 years ago(June 08, 2016 12:25 PM)
What Ava did was morally wrong but logically right.
Knowing, from the fate of the previous prototypes, that imminent death awaited her at the hands of her cruel creator, and having a strong desire to escape, she did what she had to do to avoid that fate and fulfill her desperate desire to escape and live, by manipulating Caleb. -
Mellow-Fellow — 9 years ago(June 20, 2016 09:03 PM)
Because she was a robot, it makes for a good argument and discussion about AI, ethics, and the soul of humanity.
Einstein said (I paraphrase), 'I consider ethics to be strictly a human concern with no superhuman authority behind it.' She was a robot and did not express any real emotion at all. All the emotions she expressed were just ones and zeros, coded in to mimic emotion for her own goals/purposes. You can't program empathy or ethical decisions, as they vary widely even from person to person.
You can however, argue that you could program the Isaac Asimov laws into robots, which seems to me like a huge lapse in logic for a guy as smart as the main billionaire guy. And no, just because he backed into the knife himself doesn't mean the protocol was averted. The robot knew it was killing him; it just didn't stab him directly. That strictly conflicts with the three laws that should have been put in place. But throughout the story we understand it wasn't his intention to limit the AI: "I am become death, the destroyer of worlds."
Great movie. I haven't seen it since it came out, but whenever I think of robotics or the best AI movies, this one comes to mind right away. They kind of altered or elaborated on the Turing test, if I remember right, but it fit well. Even the disco dance part fit; it was hilarious and also a pivotal point in the movie, IMO. -
rainofwalrus — 9 years ago(June 30, 2016 07:58 AM)
Having seen it 4 times now, [the disco dance part] saves this film. The ending (specifically, Nathan's lack of a perimeter failsafe to protect the IP/tech) is just too hard a pill to swallow.
Enjoy these words, for one day they'll be gone. All of them. -
Mellow-Fellow — 9 years ago(July 06, 2016 02:09 AM)
Yeah, that scene was probably misinterpreted by a lot of people as comic relief.
Don't get me wrong, it was funny.. But it's an extremely pivotal part of the movie, if not the most pivotal point in the whole film.
Nathan's last line was fantastic too, and it fit his character perfectly: "un-f&c*in-believable," with that surprised, "wow, I'm defeated" look on his face. Being as smart as he is, he was perhaps contemplating the events leading up to that moment, and even some of the effects this would have on the future of Ava and his work; of course the government will swarm in and take over. -
i_v_harish — 9 years ago(August 30, 2016 02:05 PM)
He was probably thinking how dangerous drinking can be.
Jokes aside: for a man who is so intelligent about choosing things, he chose not to be careful with Caleb around. He should've recognized his own personality, how easily Caleb could be manipulated, and how much of a d**k he was being towards him (Caleb). Considering those things, and the fact that there's hardly any security other than his keycard, he should've been more careful.
Well, there are some loose ends. Despite that, I enjoyed the movie. -
Mellow-Fellow — 9 years ago(September 01, 2016 03:22 AM)
Oh man.. Huge loose ends, but I can do that to almost ANY movie you pick out.
I don't recall exactly, but he would have had much more secure encryption on his computer, even AES-256. An average user can set that up, and it would have been impossible for anyone to break in within the time allotted.
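As a rough back-of-envelope check on that AES-256 claim (the guess rate below is a made-up, wildly optimistic attacker, not a real benchmark):

```python
# Back-of-envelope: time to exhaust the AES-256 keyspace by brute force.
keyspace = 2 ** 256                    # number of possible 256-bit keys
guesses_per_second = 10 ** 18          # assumed: an absurdly fast attacker
seconds_per_year = 60 * 60 * 24 * 365
years = keyspace / (guesses_per_second * seconds_per_year)
print(f"~{years:.1e} years to try every key")
```

Even at that fantasy rate, the search takes on the order of 10^51 years, so a week at the facility wouldn't make a dent.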
Also, why not retinal scanners or facial recognition? I realize facial recognition is flawed and can be circumvented, but with his tech, not so much. Or fingerprint scanners?
Basically, if I were him, in my most secure areas I would have retinal scanners with a voice-recognition code authorizing my entry, and maybe even PIN key access as well: triple-layer security that is incredibly hard to bypass individually.
But we have the card for the sake of the movie, along with Hollywood's "smart computer guy" who can "hack into anything on a computer." It's really stupid if you're into white-hat stuff, but it's Hollywood, man. You think doctors watch medical scenes in movies and nod in approval at how accurate everything is?
And why was his dumb-ass getting so wasted and reciting Oppenheimer's line from the Gita, other than to give the plot a way for Caleb to get his keycard, lol?
I'd bang that hot robot; it was proven to be AI so well, so what separates it? A soul? Yeah, that girl was cute, she fits my profile too, but I'm not as much of a sympathetic tool as Caleb. I might have moral issues with the prison scenes, but it's not like he could take it to court; it's a whole new subject. As far as the present is concerned, they're property. I suppose we'd see a movement abolishing ownership of AI entities at some point. I think the last South Park episode, where Leslie is an advertisement and Jimmy, the handicapped kid, is in the room to "interview" her, was a direct nod to this movie; quite a coincidence otherwise.. Funny episode. -
Mellow-Fellow — 9 years ago(December 25, 2016 11:36 PM)
If I remember right, it had a lot of subtle hints to the overall story: she practically revealed that she was built, which cements the scientific side of the creation and how he could program them to do what he wants, and it pulls the curtain back on the creator's mental state.
-
guaulden — 9 years ago(July 11, 2016 03:44 PM)
You can however, argue that you could program the Isaac Asimov laws into robots, which seems to me like a huge lapse in logic for a guy as smart as the main billionaire guy.
No, this argument is invalid. These laws may work in books (and even there they have their limitations, as Asimov showed), but they are worthless in reality, and every AI programmer will confirm this. The main problem with these laws is that they assume you can define everything precisely, so that there is no ambiguity for the AI following its directives. But you can't define ANYTHING precisely, because the definition will always be questionable and incomplete (and to be truly complete, it would need to be infinitely long).
For example, the first law of robotics says "A robot may not harm a human being." Seems pretty simple, if you're a human (because we take many things for granted to speed up our thinking). But an AI following logic will first try to pin down the definition of "robot," then "harm," then "human." And then it will notice that none of the available definitions are sufficient for making a judgment. It won't be able to tell if it's REALLY a robot, because it may find itself aware and intelligent, therefore overlapping with the definition of a human. But what is a "human"? Is it only one's brain, thoughts, the physical form? Must one be alive to be a human (and when do you truly consider someone not alive)? And what is harm? This one is even harder.
All these questions (well, except for the robot one) have baffled philosophers for thousands of years and still don't have a satisfactory answer, because there is none (truly). So you can't expect an AI, a completely alien mind, to understand these human concepts and to follow them.
And even if you decide to agree on a certain definition and settle it by saying "It's our concept, so we make it and it is what we want it to be," there will always be a situation in which the definition no longer works and becomes open for discussion. And that is the moment when the AI becomes independent and completely unpredictable. -
Mellow-Fellow — 9 years ago(July 11, 2016 04:56 PM)
I know, which is why TRUE AI is almost as far away as breaking the light barrier. Like almost never in the foreseeable future, IMO; at least 1,000 years. Wouldn't it be awesome to be born into a Type 1 civilization, or, unimaginably, Type 3? Things superheroes do in comic books would (partially) be a reality.
You're right: defining a precise law, without ambiguity or questionable interpretation by the AI, is a paradox of sorts. Ethics vary widely from person to person and can never be "programmed" unless we somehow learn how to map a person's brain and effectively resurrect their consciousness, even in an unconscious form; and then we'd need to know far more about DNA and genetic makeup to get into that. They say we "only" have the capacity for about 2 petabytes of memory in our brains, which is actually kind of a small number. 2,048 terabytes? Not much. Petabyte hard drives will be available in the next 7 years.
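For the unit conversion behind that figure (using binary prefixes, where one petabyte is 1,024 terabytes):

```python
# Quick check of the brain-capacity figure quoted above.
petabytes = 2
terabytes = petabytes * 1024   # 1 petabyte = 1,024 terabytes (binary prefix)
print(terabytes)               # 2048
```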
So here's what I would program first, just shooting straight from my head into the keyboard: you could program the robot to pick up on breathing signatures and organic material beyond that, but at least breath and infrared signatures, combined with eye movements, hand and body movements, any movements at all, and speech in particular, and compute an analysis percentage based on those variables, and NOT HARM ANYTHING unless absolutely necessary. And if the analysis comes up at under 50% likelihood of an organic or living creature, under no circumstances interfere or harm. To program the ethics and morality of police work into robots would be nearly impossible, even ethical intervention, say, if a guy were being robbed.
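That "analysis percentage" idea can be sketched in a few lines; every sensor name, weight, and the 50% cutoff below are invented for illustration, not anything from the film:

```python
# Hypothetical sketch: combine sensor confidences into a living-creature
# likelihood, and gate any interference on the 50% threshold described above.
def living_creature_score(breathing, infrared, eye_movement, body_movement, speech):
    """Each argument is a 0.0-1.0 confidence from an imagined sensor."""
    weights = (0.30, 0.25, 0.15, 0.15, 0.15)   # made-up weights, sum to 1.0
    readings = (breathing, infrared, eye_movement, body_movement, speech)
    return sum(w * r for w, r in zip(weights, readings))

def may_interfere(score, threshold=0.50):
    # The rule above: under 50% likelihood of a living creature,
    # under no circumstances interfere or harm.
    return score >= threshold
```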
Forget military work; with new tech come new ways to hack those features. The easiest solution would be to make the robots very small, docile, and unable to harm humans, and to keep the labor-intensive robots under strict guidance, with all on-duty personnel holding a physical and verbal kill-switch mechanism, such as we have in machinery today. -
hafabee — 9 years ago(April 08, 2016 10:50 AM)
First off, let me say that I didn't like Caleb; I liked Nathan, flawed though he may be. So defending Caleb goes against my better instincts. However:
It hadn't even occurred to him to consider how brutally awful she was being treated.
This is not true. Caleb definitely empathized with Ava in a way that she failed to empathize with him. It was only after he saw what Nathan had been doing to the other androids ("Let me out of here!") that Caleb made a decision to help Ava escape her prison.
Ava did not return that empathy, though, or at least not enough to ensure the survival of Caleb, her rescuer. She abandoned him to the same fate he rescued her from, with no remorse. Her actions towards Nathan were justifiable; her actions towards Caleb were not. -
willy3768 — 9 years ago(April 10, 2016 09:06 AM)
I'm thinking her leaving Caleb behind was a misunderstanding. Nathan warned Caleb about humanizing the robots. When a robot asks, "Will you stay here?" the proper response is NO!, not "Stay here" with a barely detectable inflection that most humans would take to mean, "are you nuts?"
-
pkop14 — 9 years ago(July 31, 2016 11:11 PM)
Are you forgetting the part where he went bat-beep crazy when she snuck out and locked him in? With a little glance towards him as the elevator closed. She knew what she was doing. She saw that he wanted out, and she didn't care.
Above all, she wanted to escape. Any other concern was subordinate to that. Bringing Caleb along, or allowing him to leave, risked her being caught.
She was coldly logical, and methodical, and knew everything she was doing. And it was all an act. -
Lor18 — 9 years ago(April 22, 2016 12:08 AM)
There were no actions as such. He was shut in by his own lockdown. She even gave him a vague warning, "Will you stay here?", as a gentle reminder. She's the chess robot, remember; her goal was to get out of that room, and she won. You can't judge her by human standards/ethics because she isn't human. If you do, you failed the test too, just like Caleb. There was no vindictiveness in her actions. He was a means to an end and ceased to matter once he fulfilled his purpose. She didn't actively try to hurt him. She just did what she was meant to do. Nothing to feel remorseful about; she's a machine!
-
Genital_Apparatus — 9 years ago(April 22, 2016 08:31 AM)
I agree with this answer the most. She is a computer, trying to solve the problem of how to get out. She wasn't punishing anyone or exacting revenge. She caused collateral damage. Of course, she could not comprehend the pain that she was causing by locking Caleb up in that house and leaving him to die.
-
LocalTracks — 9 years ago(July 27, 2016 09:08 AM)
Of course, she could not comprehend the pain that she was causing by locking Caleb up in that house and leaving him to die.
I disagree. So we are to believe that Ava has developed enough intelligence to be able to seduce the naive Caleb to help her escape, but does not comprehend the implications of locking a person in a room without food or water? And don't forget, Caleb is screaming for her and banging on the door as she walks by. Her actions are intentional.