Official Forum for Programming in Objective-C (the iPhone Programming Language) - Stephen Kochan
Topic: A theoretical question on creating an instance
jonr
« on: June 29, 2014, 02:27:46 PM »

This question is sort of like asking 'why is the sky blue?'. It also doesn't seem specific to Objective-C; from some C++ code I've seen, it would apply to that OO language too, so perhaps instantiation works this way in all object-oriented languages.

When we create an instance we start out by writing something like Fraction *myInstance. Here myInstance is a pointer: it stores a reference (a memory address) to where the Fraction object's data lives. It doesn't directly store the instance's data itself.

So the big question is this: Why does it have to work this way? Why does instantiation have to be accomplished through indirection? Why can't myInstance be a regular non-pointer variable that directly stores the data, rather than a pointer that stores a reference to the data's location?
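
To put the question in code form (a rough sketch on my part; this bare-bones Fraction is just a stand-in for the book's class):

Code:
#import <Foundation/Foundation.h>

// Bare-bones stand-in for the book's Fraction class.
@interface Fraction : NSObject
@end
@implementation Fraction
@end

int main(void) {
    @autoreleasepool {
        // Fraction myInstance;   // rejected: "interface type cannot be statically allocated"
        Fraction *myInstance = [[Fraction alloc] init];   // the required pointer form
        NSLog(@"%@", myInstance);
    }
    return 0;
}
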
BrianLawson
« Reply #1 on: June 29, 2014, 11:02:46 PM »

The reason class instances are accessed through a pointer is that the memory where the instance will be stored is not known at compile time; it is known only when the instance is actually created. The memory used to store the instance is allocated from the heap, and the allocation call returns the starting address of that block of memory. That starting address is what gets stored in your variable, which is why the variable is a pointer.
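
You can actually watch this happen by logging the objects; NSObject's default description includes the instance's address. A quick sketch (these lines would go inside main(), and assume the book's Fraction class):

Code:
Fraction *frac1 = [[Fraction alloc] init];   // alloc grabs a block of heap memory at run time
Fraction *frac2 = [[Fraction alloc] init];   // a second, separate block
NSLog(@"%@", frac1);   // e.g. <Fraction: 0x7fd3e9c0a2c0> -- the starting address alloc returned
NSLog(@"%@", frac2);   // a different address: two objects, two blocks

Run the program a couple of times and the addresses will usually differ from run to run, which is exactly why they can't be baked in at compile time.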

Does this help?
jonr
« Reply #2 on: July 19, 2014, 12:16:03 PM »

Brian,
Thanks very much for the answer; it does help. However, it also raises more questions. Btw, I was going to respond earlier but had to think about your answer a bit and then got sidetracked. I don't have any formal CS training, and what coding experience I have has come through self-study, so I'm pretty inexperienced with memory architecture, though I've read a little about it. Here are some follow-up questions.

1. When you say the memory where the instance is stored is not known at compile time, can I take that to mean it is known only at run time? Another way to ask it: since the memory is known only when the instance is actually created, I take that to mean the instance is created at run time (not compile time). Is my understanding correct?

2. When you say 'allocated from the heap', I suppose that means as opposed to being allocated from the stack? Do I have that right?

3. Does the relationship you outlined pretty much describe instantiation in other object-oriented programming languages as well? In other words, is it safe to say this is a fairly universal situation with respect to object creation and memory storage?

4. This last question is a bit tricky, but I'll try. It's about cause and effect. You stated that the memory is not known at compile time; it's known only when the object is created. Hence the memory must be allocated from the heap, and thus a pointer is used. So the *cause* is when the memory becomes known (not at compile time), and the *effect* is the need for a pointer. If I have the cause and effect right, here's a more obnoxiously detailed question, because I want to get to the root cause: *why* is the memory not known at compile time? It is this fact that causes all the rest to follow.

thanks,
jonR
BrianLawson
« Reply #3 on: July 19, 2014, 12:38:14 PM »

Quote
1. When you say the memory where the instance is stored is not known at compile time, can I take that to mean it is known only at run time? Another way to ask it: since the memory is known only when the instance is actually created, I take that to mean the instance is created at run time (not compile time). Is my understanding correct?
Yes.

Quote
2. When you say 'allocated from the heap', I suppose that means as opposed to being allocated from the stack? Do I have that right?
Yes. You never explicitly allocate memory from the stack; the stack is managed for you as the program runs. When you call a function, the parameters passed to it get pushed onto the stack, along with its local variables and the information needed to return to the caller. As a programmer you have no direct control over stack memory.
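
Here's a tiny illustration of the two regions (plain C, with made-up names; the addresses you see will vary):

Code:
#include <stdio.h>
#include <stdlib.h>

void demo(void) {
    int onTheStack = 42;                    // a local variable: placed on the stack automatically
    int *onTheHeap = malloc(sizeof(int));   // memory explicitly requested from the heap
    *onTheHeap = 42;
    printf("stack: %p   heap: %p\n", (void *)&onTheStack, (void *)onTheHeap);
    free(onTheHeap);                        // heap memory must be handed back explicitly
}                                           // ...while onTheStack vanishes on its own right here

int main(void) { demo(); return 0; }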

Quote
3. Does the relationship you outlined pretty much describe instantiation in other object-oriented programming languages as well? In other words, is it safe to say this is a fairly universal situation with respect to object creation and memory storage?
This is a kind of yes-and-no answer. How the programmer sees the memory used by instantiation depends on the language being used. In Apple's new Swift language, objects are treated programmatically just like any other variable. There is no alloc or init method for the programmer to call; that is taken care of by the compiler/runtime. You simply assign a new object to a variable name just as you would an int or a float, etc.

Quote
4. This last question is a bit tricky, but I'll try. It's about cause and effect. You stated that the memory is not known at compile time; it's known only when the object is created. Hence the memory must be allocated from the heap, and thus a pointer is used. So the *cause* is when the memory becomes known (not at compile time), and the *effect* is the need for a pointer. If I have the cause and effect right, here's a more obnoxiously detailed question, because I want to get to the root cause: *why* is the memory not known at compile time? It is this fact that causes all the rest to follow.
Because the program cannot know in advance how it will be used, especially when user interaction is involved. For instance, when running Word or Photoshop or any other program that lets the user create new documents or open existing ones, how many documents will be open at any given time, and when will they be opened? There is no way for the compiler to predict this, so there is no way for it to set aside the necessary memory for those documents ahead of time. The memory for a document can be allocated only at the moment the document is created or opened, and for that reason there is no way to know in advance where in memory it will be stored.
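
A toy version of the document example, where how many objects get created isn't known until the user types a number at run time (a sketch; Document is a made-up class):

Code:
#import <Foundation/Foundation.h>
#include <stdio.h>

// A made-up stand-in for a document object.
@interface Document : NSObject
@end
@implementation Document
@end

int main(void) {
    @autoreleasepool {
        int count = 0;
        printf("How many documents? ");
        scanf("%d", &count);                          // known only at run time

        NSMutableArray *docs = [NSMutableArray array];
        for (int i = 0; i < count; i++)
            [docs addObject:[[Document alloc] init]]; // each alloc picks a fresh heap address
        NSLog(@"created %lu documents", (unsigned long)docs.count);
    }
    return 0;
}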

I don't know if you remember when programs were given only a specific amount of memory in which to run, and if you tried to do too much with them you could get "out of memory" errors. That happened when the stack memory and the heap memory collided. Think of the memory as a column: stack memory grows from the bottom up, while heap memory is allocated from the top down. With a fixed amount of memory, only so many objects can be created and only so many functions can be called in a nested fashion; with one growing up and the other growing down, they eventually run into one another and there is no more memory for the program to run in. Thankfully, today's operating systems no longer restrict programs to a specific, limited amount of memory, and the memory a program needs can be allocated dynamically while it runs. Guess how that is handled? You got it: through pointers maintained by the OS and the runtime.
jonr
« Reply #4 on: July 23, 2014, 02:51:15 PM »

Brian,

Thanks again for the follow-up with the additional info; it's much appreciated. I'm all clear on the first two questions but have a few more about #3 and #4.

Quote
3. ME: Does the relationship you outlined pretty much describe instantiation in other object-oriented programming languages as well? [...]
YOU: This is a kind of yes-and-no answer. How the programmer sees the memory used by instantiation depends on the language being used. [...]

I should have been clearer: I was really only thinking of languages from an older timeframe than Swift, i.e. C variants like Objective-C, and in particular C++ and C#. So for C++ and C#, are pointers also needed/used for instantiation, just as they are in Objective-C? From what you've said I'm pretty sure the answer is yes, but I just wanted to make sure.

Quote
4. ME: [...] I want to get to the root cause: *why* is the memory not known at compile time? It is this fact that causes all the rest to follow.

YOU: Because the program cannot know in advance how it will be used, especially when user interaction is involved. [...] The memory for a document can be allocated only at the moment the document is created or opened, and for that reason there is no way to know in advance where in memory it will be stored.

What you are saying makes perfect sense, and I think my original confusion, and my need for this clarification, comes from the simple examples we've worked through in the book so far. They are deliberately simple abstractions, used to teach the basic concepts and syntax. You cited user interaction and typical uses such as opening documents, and I can see how those are examples of a program not knowing how it is going to be executed. You even made a point of saying a program doesn't know how it is going to be run *especially* when user interaction is involved. But in the book's examples there is no user interaction: after the objects are created, we call the methods we've written, right there in our program file (main.m). We are writing simple programs to be run in a terminal window, which shows the level of abstraction we're working at compared to real-world programs.

So my big-picture thinking has been hijacked by the immediate reality of these simple examples. I keep wanting to ask: OK, how does the program *not know* how it is to be used in these simple cases? We've created the objects and called methods on them. What is 'unknown' about that, why do we still need pointers, and why are these things known only at run time? What is flawed about my thinking here? Is it this: that in these simple programs, creating the instances and calling their methods, all of which we code in before we run the program, is functionally the same as user interaction performed while the program is running? And since you mentioned user interaction as seemingly the prime example of how a program may not know how it will be run, what would be an example that *doesn't* involve user interaction?

Let me take a step back and try to answer that last question myself, anticipating what you may say. Perhaps the reason I'm asking for an example that doesn't involve user interaction is that I'm incorrectly treating the things we code into our simple programs as somehow different from user interaction, when with respect to run time they're really the same. And to take an even bigger overview, perhaps I've had to ask you to explain #4 further because I'm overlooking something very obvious: anything inside main.m, any statement that is evaluated, is run at run time, and whether it's user input or something pre-coded into the program doesn't matter... it's all the same.
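
In other words (a rough sketch, assuming the book's Fraction class and its Fraction.h header), even a fully pre-coded program does all of its allocating while it runs:

Code:
#import "Fraction.h"   // the book's class

int main(void) {
    @autoreleasepool {
        // No user input anywhere, yet none of this exists until the program runs:
        Fraction *myFraction = [[Fraction alloc] init];  // address chosen here, at run time
        NSLog(@"%@", myFraction);                        // logs something like <Fraction: 0x7fb1...>
    }
    return 0;
}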

thx,
jonR
BrianLawson
« Reply #5 on: July 23, 2014, 04:21:10 PM »

Quote
#3 I should have been clearer: I was really only thinking of languages from an older timeframe than Swift, i.e. C variants like Objective-C, and in particular C++ and C#. So for C++ and C#, are pointers also needed/used for instantiation, just as they are in Objective-C? From what you've said I'm pretty sure the answer is yes, but I just wanted to make sure.
I don't know C#, so I can't speak to it. C++ works the same way when you use new: the memory is allocated from the heap and the object is accessed through a pointer (though C++ also lets you create objects directly on the stack as local variables). Also, I misspoke about Swift: it has no alloc method, but every data member within a class must still be initialized, in an init method the programmer writes.

#4) These are good questions, but they are advanced topics dealing with compiler design and computational theory. Ask anyone who has taken a computational theory class what they thought of it; you're not likely to find many positive responses. It's a tough class.

An example of a program that does not involve user interaction would be one that reads a data file of some sort and then runs some kind of analysis on that data. For instance, when I worked on the Space Shuttle Launch Processing System, we had a program that analyzed fuel flow rates through the pipes used to fill the hydrogen and oxygen tanks in the Shuttle's external tank. It took into account the temperature on each side of a pipe (one side is in the sun, the other is shaded, causing a temperature differential within the pipe) to figure out how much the liquid fuels would expand while being pumped through the pipes, in order to calculate the flow rate. That involved no user interaction beyond starting the program's execution.

As for the compiler being able to predict the operation of the program it is compiling: doing what you suggest boils down to needing another program that is designed to accept your program as input and analyze its operation. This is where computational theory comes in, and a lot of the class is spent on this topic. Anyhow, that goes beyond what compilers can do today, although Xcode at least is starting to offer some of that analysis as tools you can run separately. Basically, when it comes to how a language handles the creation and use of objects, it all boils down to compiler design.

You mentioned "languages of an older timeframe… [like] C++ and C#". Even Objective-C is an older language now, having been designed in the early '80s using C as a base language and incorporating features of Smalltalk for the object-oriented portions. Newer languages like Python and Swift are getting better at making things simpler for the programmer and moving away from the explicit need for pointers.
jonr
« Reply #6 on: July 23, 2014, 04:44:05 PM »

Brian,
Thanks again for all of the info. Yes, I know these questions are 'over my pay grade' and not really something I need to know this early in my coding career. However, sometimes I get a little too curious for my own good.

The specific examples of user interaction vs. non-user interaction (reading in data files) all make perfect sense. But I think my earlier misunderstanding was a bit broader. Now that you have clarified a lot of things, I'm wondering if you could give me a final validation of whether I understand *where* my thinking went wrong.

In my last post I brought up the way the book's simple examples, up to this point, abstract away what's really going on. Obviously I'm not criticizing the book (the examples at this beginner stage have to be simple), but the downside is that their simplicity can get in the way of fully understanding what's happening. Because of that simplicity, I wasn't really seeing why the compiler wouldn't know what's going on in our little programs; all we're doing is creating an object and 'doing stuff' to it with our methods. Where I went wrong was in not recognizing that everything in main.m, the actual program being executed, is RUNTIME activity, regardless of whether there is user interaction or not. Is that where my thinking was off? In other words, I wasn't thinking correctly about what actually happens when you run main.m. Am I on the right track now?
thanks,
jonR
BrianLawson
« Reply #7 on: July 23, 2014, 05:07:57 PM »

Yes Jon, I think you are on the right track. One thing to keep in mind is that the definition of a compiler's responsibilities has changed over the years as computer processing power has increased. A compiler's primary function is to create executable code from source code, and at first that was all it did. With more processing power comes the ability to do some predictive analysis to help steer the programmer away from potential logic problems. Swift has made some progress in this area, from what I've read. Some. Full-blown analysis of a program's execution is still outside the realm of a compiler, but newer tools are becoming available to help with that as well. After you finish this book, I'd recommend Xcode 5 Unleashed by Fritz Anderson; he introduces some of the tools that come with Xcode. While it has code in it, the book is less about writing programs and more about how to use the Xcode IDE to get your programming accomplished.
jonr
« Reply #8 on: July 24, 2014, 07:45:05 PM »

Brian,
Thanks again. I'll definitely check out the book recommendation. I really appreciate you taking the time to explain some of these concepts. Although I really enjoy self-study, one of its downsides is not being able to raise your hand in a classroom with F2F interaction; that really helps with questions that are a bit 'off the beaten path', like the ones we've been discussing. With that said, I appreciate your instruction through this medium. Have a great rest of the summer!
Cheers,

jonR
BrianLawson
« Reply #9 on: July 24, 2014, 08:43:12 PM »

You're welcome Jon, I'm glad I could help.
jonr
« Reply #10 on: July 26, 2014, 04:25:48 PM »

Brian,

I had one more question about the book recommendation, Xcode 5 Unleashed by Fritz Anderson. Are you sure this book exists for Xcode 5? I could only find 'Xcode 4 Unleashed'. Fritz Anderson does have an Xcode 5 book, but it's called 'Xcode 5 Start to Finish'. Is that the one you're referring to?

Also, while I have your attention, I was wondering if I could ask about something minor in chapter 3. At the bottom of page 44, after the Program 3.3 output listing, there is a bit of explanation. It talks about the two objects that were created, frac1 and frac2, both of which have instance variables named numerator and denominator. However, the book calls frac1 itself an instance variable in the last line of the page: '....The instance variable frac1 gets its instance variable numerator set to 2'. That seems a bit odd; is it a typo? These are two different kinds of things: one is an object (frac1), while the other is an instance variable (numerator) of an object. Is the term instance variable meant to be used loosely like this in Objective-C, or is this a mistake in the book? Since frac1 is programmatically a pointer (*frac1), it's not incorrect to refer to frac1 as a variable. But calling it an *instance* variable seems inconsistent with how the book has used the term so far to describe numerator and denominator, both of which are instance variables of objects.
thx,
jonR                                                               
BrianLawson
« Reply #11 on: July 26, 2014, 05:26:40 PM »

Yes, you're right, the book was renamed for version 5 of Xcode. Sorry for the confusion.

I think he may be using the terminology a bit loosely there. frac1 is still a variable, since it can point to any instance of a Fraction object you want, but calling it an instance variable is confusing. The term instance variable is usually reserved for a variable defined inside a class.
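
In code terms, the usual usage looks like this (a minimal sketch of the book's Fraction class):

Code:
#import <Foundation/Foundation.h>

@interface Fraction : NSObject {
    int numerator;     // instance variables: defined inside the class, so
    int denominator;   // every Fraction object carries its own copy of each
}
@end
@implementation Fraction
@end

int main(void) {
    @autoreleasepool {
        // frac1 is an ordinary local variable that points to a Fraction
        // instance; it is not itself an instance variable.
        Fraction *frac1 = [[Fraction alloc] init];
        NSLog(@"%@", frac1);
    }
    return 0;
}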