
Film Glance Forum


It was obvious that Nathan was a villain. But Caleb was just an innocent in the wrong place at the wrong time, right?

The Cinema
50 Posts 1 Posters 0 Views
    #6

    rainofwalrus — 9 years ago(June 30, 2016 07:58 AM)

Having seen it 4 times now, [the disco dance part] saves this film. The ending (specifically, Nathan's lack of a perimeter failsafe to protect the IP/tech) is just too hard a pill to swallow.
    Enjoy these words, for one day they'll be gone All of them.

      #7

      Mellow-Fellow — 9 years ago(July 06, 2016 02:09 AM)

Yeah, that scene was probably misinterpreted by a lot of people as comic relief.
Don't get me wrong, it was funny. But it was an extremely pivotal part of the movie, if not the most pivotal point in the whole film.
Nathan's last line was fantastic too, and fit his character perfectly: "un-f&c*in-believable," with that surprised, I'm-defeated look on his face. Being as smart as he is, he was perhaps contemplating the events leading up to that moment, and even some of the effects this would have on the future of Ava and the work; of course the government will swarm in and take over.

        #8

        i_v_harish — 9 years ago(August 30, 2016 02:05 PM)

        He was probably thinking how dangerous drinking can be.
Jokes aside: for a man who is so intelligent in choosing things, he chose not to be careful with Caleb around. He should've realised, given his own personality, how easily Caleb could be manipulated and how much of a d**k he was being towards him (Caleb). Considering those things, and the fact that there was hardly any security other than his card, he should've been more careful.
        Well, there are some loose ends. Despite that, I enjoyed the movie.

          #9

          Mellow-Fellow — 9 years ago(September 01, 2016 03:22 AM)

Oh man, huge loose ends, but I can do that to almost ANY movie you pick out.
I don't recall exactly, but he would have had much more secure encryption on his computer. Even AES-256, which an average user can set up, would have been impossible for anyone to break in the time allotted.
Also, why not retinal scanners or facial recognition? I realize facial recognition is flawed and can be circumvented, but with his tech, not so much. Fingerprint scanners, too.
Basically, if I were him, in my most secure areas I would have retinal scanners plus a voice-recognition code authorizing my entry, and maybe even PIN key access as well: triple-layer security, where each layer is incredibly hard to bypass on its own.
But we have the card for the sake of the movie, along with Hollywood's "smart computer guy" who can "hack into anything on a computer." It's really stupid if you're into white-hat stuff, but it's Hollywood, man. You think doctors watch medical scenes in movies and nod in approval at how accurate things are?
And why was his dumb-ass getting so wasted and reciting lines from Oppenheimer and the Gita, other than for the plot to give Caleb access to his keycard, lol?
I'd bang that hot robot. It was proven to be AI so well, what separates it? A soul? Yeah, that girl was cute, she fits my profile too, but I'm not as much of a sympathetic tool as Caleb. I might have moral issues with the prison scenes, but it's not like he can take it to law; it's a whole new subject. As far as the present is concerned, they are property. I suppose we would see a movement abolishing ownership of AI entities at some point. I think the last South Park episode, where Leslie is an advertisement and Jimmy the handicapped kid is in the room to "interview" her, was a direct nod to this movie. Quite a coincidence otherwise. Funny episode.
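The "triple layer" setup described above can be sketched in a few lines. This is a toy illustration of the idea, not anything shown in the film; the factor names and the all-factors-required policy are my own assumptions.

```python
# Toy sketch of a "triple layer" access check: retina + voice + PIN,
# where ALL factors must independently pass before the door opens.
# Factor names and the policy are illustrative assumptions.

REQUIRED_FACTORS = {"retina", "voice", "pin"}

def authorize(presented):
    """Grant access only if every required factor independently passes."""
    return all(presented.get(factor) is True for factor in REQUIRED_FACTORS)

# A single stolen credential (one passing factor) is not enough,
# unlike Nathan's lone keycard:
print(authorize({"retina": True, "voice": False, "pin": True}))  # False
print(authorize({"retina": True, "voice": True, "pin": True}))   # True
```

The point of requiring all three factors is that an attacker (or a Caleb) has to defeat every layer at once, instead of just lifting one card off a drunk CEO.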

            #10

            cbkoc — 9 years ago(December 25, 2016 07:24 PM)

Can you explain what was pivotal about it? I tuned out right when it began because I assumed it was just comic relief, lol.

              #11

              Mellow-Fellow — 9 years ago(December 25, 2016 11:36 PM)

If I remember right, it had a lot of subtle hints to the overall story: she practically revealed that she was built, it cements the scientific side of the creation and how he could program them to do what he wants, and it pulls the curtain back on the creator's mental state.

                #12

                slartibartfast-62706 — 9 years ago(February 11, 2017 05:57 PM)

"un-f&c*in-believable"
"un-f^c&in-real," actually.

                  #13

                  guaulden — 9 years ago(July 11, 2016 03:44 PM)

"You can, however, argue that you could program the Isaac Asimov laws into robots, which seems to me like a huge lapse in logic for a guy as smart as the main billionaire guy."
No, this argument is invalid. These laws may work in books (and even there they have their limitations, as Asimov showed), but they are worthless in reality, and every AI programmer will confirm this. The main problem with these laws is that they assume you can define everything precisely, so there is no ambiguity for the AI following its directives. But you can't define anything precisely, because the definition will always be questionable and incomplete (and to be truly complete, it would need to be infinitely long).
For example, the first law of robotics says "A robot may not harm a human being." Seems pretty simple, if you're a human (because we take many things for granted to accelerate thinking). But an AI following logic will first try to reach the definition of "robot", then "harm", then "human". And then it will notice that none of the available definitions is sufficient for it to make a judgement. It won't be able to tell if it is really a robot, because it may find itself aware and intelligent, therefore overlapping with the definition of a human. But what is a "human"? Is it only one's brain, thoughts, or physical form? Must one be alive to be a human (and when do you truly consider someone not alive)? And what is harm? This one is even harder.
All these questions (well, except for the robot one) have baffled philosophers for thousands of years and still don't have a satisfactory answer, because truly there is none. So you can't expect an AI, a completely alien mind, to understand these human concepts and to follow them.
And even if you decide to agree on a certain definition and settle down, saying "It's our concept, so we make it and it is what we want it to be", there will always be a situation in which the definition no longer works and becomes open for discussion. And that moment is the moment the AI becomes independent and completely unpredictable.

                    #14

                    Mellow-Fellow — 9 years ago(July 11, 2016 04:56 PM)

I know, which is why TRUE AI is almost as far away as breaking the light barrier. Like, almost never in the foreseeable future, IMO. At least 1,000 years. Wouldn't it be awesome to be born in a Type 1 civilization, or, unimaginably, Type 3? Things superheroes do in comic books would (partially) be a reality.
You are right: to define a precise law, without ambiguity or questionable interpretation by the AI, into artificial intelligence is a paradox of sorts. Ethics vary wildly from person to person and can never be 'programmed' unless we somehow learn how to map a person's brain and effectively resurrect their consciousness, even in an unconscious form; then we'd need to know more about DNA and genetic makeup to get into that. They say we 'only' have the capacity for 2 petabytes of memory in our brains, actually kind of a small number. 2048 terabytes? Not much. Petabyte hard drives will be available in the next 7 years.
So here's what I would program first, off the top of my head, just shooting right out of my head into the keyboard here: you could program the robot to pick up on breathing signatures and organic material beyond that, but at least breath and infrared signatures, combined with eye movements, hand and body movements, any movements at all, and speech in particular, and compute an analysis percentage based on those variables, and NOT HARM ANYTHING unless absolutely necessary; and if the analysis comes up at under 50% organic or living-creature possibility, under no circumstances interfere or harm. To program the ethics and morality of police work into robots would be nearly impossible, even ethical intervention, say, if a guy were being robbed.
Forget military work, and with new tech arise new ways to hack those features. The easiest solution would be to make the robots very small, docile, and unable to harm humans, and to keep the labor-intensive robots under strict guidance, with all personnel on duty holding a physical and verbal kill-switch mechanism such as we have in machinery today.
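The "analysis percentage" rule above can be sketched as a tiny weighted score with a 50% threshold. The cue names and weights are invented for illustration; this is a sketch of the poster's idea, not a real detector.

```python
# Toy sketch of the post's rule: combine weighted sensor cues
# (breathing, infrared, movement, speech) into a living-creature score.
# Below the 50% threshold, interference is forbidden under any
# circumstances; otherwise it is allowed only when absolutely necessary.
# Cue names and weights are invented for illustration.

WEIGHTS = {"breathing": 0.3, "infrared": 0.3, "movement": 0.2, "speech": 0.2}

def living_score(cues):
    """Weighted sum of detected cues; each cue value is in [0, 1]."""
    return sum(WEIGHTS[name] * cues.get(name, 0.0) for name in WEIGHTS)

def may_interfere(cues, absolutely_necessary=False):
    """Mirror the post: default is never harm; under 50% living-score,
    interference is forbidden no matter what."""
    if living_score(cues) < 0.5:
        return False  # "under no circumstances interfere or harm"
    return absolutely_necessary
```

For example, a subject with strong breathing and infrared signatures scores 0.6, so interference is only possible when flagged as absolutely necessary; a faint movement-only reading scores 0.2 and is off-limits entirely.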

                      #15

                      DaveBowman2001 — 9 years ago(October 12, 2016 11:50 AM)

Caleb did bring it on himself, by secretly reprogramming the locks while Nathan was passed out. Karma, I guess.

                        #16

                        hafabee — 9 years ago(April 08, 2016 10:50 AM)

First off, let me say that I didn't like Caleb; I liked Nathan, flawed though he may be, so defending Caleb goes against my better instincts. However:
"It hadn't even occurred to him to consider how brutally awful she was being treated."
This is not true. Caleb definitely empathized with Ava in a way that she failed to empathize with him. It was only after he saw what Nathan had been doing to the other androids ("Let me out of here!") that Caleb made the decision to help Ava escape her prison.
Ava did not return that empathy, though, or at least not enough to ensure the survival of Caleb, her rescuer. She abandoned him, with no remorse, to the same fate he rescued her from. Her actions towards Nathan were justifiable; her actions towards Caleb were not.

                          #17

                          willy3768 — 9 years ago(April 10, 2016 09:06 AM)

I'm thinking her leaving Caleb behind was a misunderstanding. Nathan warned Caleb about humanizing the robots. When a robot asks, "Will you stay here?", the proper response is NO!, not "Stay here" with a barely detectable inflection that most humans would take to mean "are you nuts?"

                            #18

                            pkop14 — 9 years ago(July 31, 2016 11:11 PM)

Are you forgetting the part where he went bat-beep crazy when she snuck out and locked him in? With a little glance towards him as the elevator closes. She knew what she was doing. She saw that he wanted out, and she didn't care.
Above all, she wanted to escape. Any other concern was subordinate to that. Bringing Caleb along, or allowing him to leave, risked her being caught.
She was coldly logical and methodical, and knew everything she was doing. And it was all an act.

                              #19

                              Lor18 — 9 years ago(April 22, 2016 12:08 AM)

There were no actions as such. He was shut in by his own lockdown. She even gave him a vague warning, "Will you stay here?", as a gentle reminder. She's the chess robot, remember; her goal was to get out of that room, and she won. You can't judge her by human standards/ethics because she isn't human. If you do, you failed the test too, just like Caleb. There was no vindictiveness in her actions. He was a means to an end and ceased to matter once he fulfilled his purpose. She didn't actively try to hurt him; she just did what she was meant to do. Nothing to feel remorseful about, she's a machine!

                                #20

                                Genital_Apparatus — 9 years ago(April 22, 2016 08:31 AM)

                                I agree with this answer the most. She is a computer, trying to solve the problem of how to get out. She wasn't punishing anyone or exacting revenge. She caused collateral damage. Of course, she could not comprehend the pain that she was causing by locking Caleb up in that house and leaving him to die.

                                  #21

                                  LocalTracks — 9 years ago(July 27, 2016 09:08 AM)

"Of course, she could not comprehend the pain that she was causing by locking Caleb up in that house and leaving him to die."
I disagree. Are we to believe that Ava has developed enough intelligence to seduce the naive Caleb into helping her escape, but does not comprehend the implications of locking a person in a room without food or water? And don't forget, Caleb is screaming for her and banging on the door as she walks by. Her actions are intentional.

                                    #22

                                    pkop14 — 9 years ago(July 31, 2016 11:16 PM)

She could comprehend the pain just fine. But she didn't care. Her only motivation was escape/survival. Bringing Caleb along, or allowing him to leave, risked endangering that goal.
It's like when a criminal or assassin has witnesses: they kill them. Not because they don't understand the pain; they understand, but it is irrelevant to them. They understand that any witness is a liability. If Caleb leaving carries even a 0.001% chance of Ava being put back in prison or shut down, then she won't allow it to happen. She does not have human sympathy, or if she is aware of it, it does not motivate her.

                                      #23

                                      chezcam — 9 years ago(February 05, 2017 05:27 AM)

I agree and disagree on this. You see, when she asked Caleb if he'd stay, he didn't answer her. She was asking him if he'd stay in the building; he thought she was asking him to stay there while she went to do something. This misunderstanding puts Caleb in a situation where he could have escaped, because her objective was not to kill everyone inside the research compound, but to escape, and to do so she had to eliminate all the obstacles preventing that, hence why she killed Nathan. But Caleb was not an obstacle; he was a solution. He was helping her escape.
So, going back to when she said "will you stay", she was offering him a way out. The misunderstanding led her to believe that he was staying; therefore, he kind of brought this on himself. He could have escaped. So, I understand why someone may not feel sympathy for Caleb: because he's a dumb-ass. But I also understand why people would feel sympathy for him: BECAUSE WE ARE HUMAN AND HUMANS FEEL SYMPATHY FOR OTHER HUMANS WHEN THEY ARE IN DANGER.

                                        #24

                                        nettwench — 9 years ago(January 11, 2017 03:42 PM)

Exactly. Much like HAL in 2001. The amoral aspect of Nathan's character, and his egotism in wanting to be a "god" by creating this robot, also play into it. Because he is a sociopath himself, he creates a robot with no ethics beyond its own gratification and survival. His brilliance would be to create a being without limits, and that was his fatal flaw, as well as Caleb's. Never trust a sociopath.
Caleb didn't trust Nathan; he should not have trusted Nathan's creation either. You can't program empathy into a machine, much as you could not create an appreciation of art or music or poetry, because that would mean it has feelings, a soul. Being able to mimic emotion is not the same as actually having emotion. And how could a person with no ethics or empathy program a machine that did?
They certainly mimicked emotion well enough for Caleb to believe they were "suffering." But that was the projection of a decent, empathetic person onto these creations, which could not really return the same consideration. The real question here is: were these robots suffering? It seems to be implied. It certainly looks like they are suffering, but is that just a consequence of being able to mimic human behavior? Why would they express anger at the limitations their creator set them up with? Or was it just about being programmed to survive under all circumstances, to not be dominated by another's will?
We also see Ava acting like a young schoolgirl with a crush, very childlike, when she is dressing for her "date": clutching at her too-long sleeves like a nervous 13-year-old. She did this when only we could see her; she was out of Caleb's sight when we watched this. So was this about the director wanting to manipulate the audience, too? So that we would look at this behavior and project our own emotions onto her, and be fooled as well?
                                        Ssssshh! You'll wake up the monkey!

                                          #25

                                          Matthew_Diamond — 9 years ago(June 27, 2016 12:21 AM)

** SPOILER **
How is it certain that Caleb died, or was left to die? Didn't he tell Nathan, before the plot revelation, that he had already reversed the door action for the 10:00 power failure, so that the doors would open rather than lock down? If so, why would Caleb still be locked in the house? Although the helicopter took Ava away, it is not 100% certain that Caleb died and did not escape (through the doors that opened).
This is a little confusing to me, since Caleb said he reversed the door action, yet at the end we see him trying to breach a locked door. Didn't he say the doors would unlock upon the 10:00 staged power cut?
Help me out here.
                                          I really don't like talking about my flair.

                                          Powered by NodeBB Contributors