OOPs Concept

Posted by swapnil.... at 7:16 AM

Monday, January 28, 2008

Object-oriented programming (OOP) is a computer science term used to characterize a programming style whose development began in the 1960s. The term 'object-oriented programming' was coined by Alan Kay at Xerox PARC to describe a methodology that uses objects as the foundation for computation. By the 1980s, OOP had risen to prominence, exemplified by the success of C++. Currently, object-oriented languages such as Java, C++, C#, Visual Basic .NET, Python and JavaScript are popular languages that any career-oriented software engineer or developer should be familiar with.

OOP is widely regarded as more flexible than earlier procedural programming styles. Object-oriented languages build on three fundamental concepts: classes, objects and methods. Inheritance, abstraction, polymorphism, event handling and encapsulation are also significant concepts within object-oriented programming languages, and online tutorials explain the functionality of each in detail.

High-paying jobs are offered around the world for experts in software development using object-oriented programming languages. The OOP concepts, described and explained in easy-to-use, step-by-step practical tutorials, will familiarize you with all aspects of OOP.

C Programming Tips and Tricks

Posted by swapnil.... at 7:06 AM

Abstract
This document is the result of me being thrown back into C programming after a few years' absence. When I was in school I learnt basic C, C++ and programming under Windows. In my current job I was required to re-write some Perl code in C. Following are some things that I stumbled on and that I feel others could benefit from. Learn through my failures, so to speak :)

Conversions

Be careful when doing conversions from strings to numbers, and ensure that you are using the correct function. At one point I wrote some code that looked like:

unsigned long bytes;
...
bytes = (unsigned long)atoi( s ); // save bytes for later

Even though I was casting to unsigned long, the atoi() function converts to int, not long. The code worked for a fair amount of time, until the value stored in s grew larger than an int could hold, and then suddenly 0 was returned. atoi() does not report errors on failure; it simply bails and returns 0 (and strictly speaking its behaviour on overflow is undefined). To solve this problem use atol() instead, or better still, strtoul().

Fork()ing, Daemons and Servers

At some point you will be called to write a server, daemon, or some program that simply sits and waits for input.

To make a program put itself in the background and go into "daemon mode", do the following:

#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main( int argc, char** argv )
{
    pid_t pid = 0;
    /* set stuff up */
    /* accept command line args */

    pid = fork();
    if( pid == 0 ) {
        /* this is the "background" (child) process. Execute process loop here */
    }
    else {
        /* "foreground" (parent) process exits */
        exit(0);
    }
    return 0;
}

This makes the program duplicate itself into a "parent" and a "child" process. In the child, fork() returns 0; in the parent, it returns the child's PID (something != 0). If the return value is 0, continue processing; if not, exit. The program will run in the background just like a good daemon should! (A production daemon would also call setsid() and redirect the standard streams, but this is the core of it.)

Some other tips for doing this would be:

  • Supply command line arguments such as "-debug" and "-nofork". The first does no forking into the background and prints any debug messages, while the second also does not fork, but prints no debug messages. This is especially handy if you have a lot of debug messages but are having specific problems such as segfaults or some other error that does not show up when the program is put into daemon mode. I had one nasty bug due to an ill-defined hostname. When I ran my daemon, it simply exited (as it should) but nothing was left running in the background! Running it with -nofork showed me the error message immediately.
  • Use while(1){...} as your main process loop, not for(;;){...}. Both do exactly the same thing, but the while(1) is more intuitive for someone (perhaps even you) reading the code in the future.

Signals

As mentioned later (see the Memwatch section), sometimes you'll want a program to exit cleanly instead of being killed outright with CTRL-C. This is fine for normal programs, which run from start to finish, but what about servers, which are designed to run forever? Try adding something like this:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void catch_sigint( int );

int main( int argc, char** argv ) {
    struct sigaction sa_old;
    struct sigaction sa_new;

    /* set up signal handling */
    sa_new.sa_handler = catch_sigint;
    sigemptyset(&sa_new.sa_mask);
    sa_new.sa_flags = 0;
    sigaction(SIGINT, &sa_new, &sa_old);

    /* continue */
    while(1) {
        /* process loop that goes on forever */
    }
}

static void catch_sigint( int signo ) {
    fprintf(stderr, "Caught SIGINT, exiting...\n");  /* or your own Log() */
    /* final cleanup stuff */
    exit(1);
}

In this case, the program will loop forever, until you hit CTRL-C (SIGINT) and then it will execute the catch_sigint() function and do whatever you have in it. You could also use this to make sure your program does not die on the user hitting CTRL-C.

Debugging

There are many different ways to do debugging, and many different methodologies for it. Some things I suggest (and your mileage may vary) would be:

  • Lots of printf() statements. C Elements of Style suggests prefixing all comments with "##" for easy search and removal. This is a good idea. I also like to keep lots of if( debug ) printf("...")'s around as well, so that when my program is started with the "-debug" flag, useful information is displayed. Either way, if you don't know where something is crashing, or what the value is in a variable (never mind what it should be), use a printf().
  • Remove large sections of code with #ifdef QQQ. This is another suggestion from C Elements of Style, and a good one. Instead of commenting out, or worse, deleting, wrap it as:

    #ifdef QQQ
    /* code that you don't want to run */
    #endif QQQ

    A couple of notes:

    • If you don't like QQQ (suggested for the fact there is little chance of it being defined, and for its searchability), use #ifdef UNDEF, which should never be defined!
    • The QQQ on the #endif is not needed, and is only there as a visual reminder of what you are ending. Some older (and strictly conforming) compilers may puke on this, however. If they do, just stick with plain old #endif, or use #endif /* QQQ */, which is always legal.
  • You can resort to using exit(0) to mark points of arrival. If you have some particularly odd code (graphics, games or something) and you are having trouble tracking down exactly where the code is getting to, stick an exit(0) in. If the program exits with an error code of 0, you'll know it hit your statement. Linus Torvalds used a similar technique when creating the first Linux video drivers in assembler. He'd put reboot commands in, and if the system just froze he knew his reboot statement hadn't been reached.

gdb

This is the GNU debugger. It rocks. I don't know a whole lot about it, but here is what I do with it.

Compile your programs with -ggdb. This makes a large binary, but the extra size is debugging information included specifically for gdb.

Run your program with $ gdb ./progname. You'll be presented with a (gdb) prompt. Type run args (where args are any command line arguments that you'd like to pass to "progname"). gdb also mimics a shell, in that typing "make" at the (gdb) prompt will execute make in the current directory. Result? If you have a makefile it gets run. Other result? One less xterm needed for development.

When in the shell, your program could puke on a segfault. If so, type "where" or "bt" (for backtrace) and you'll get a list of the tree of commands executed, ending with the one causing the segfault. Something like:

(gdb) r
Starting program: /home/alan/code/ulisten/./test
Hostname Enterprise
gethostbyname: Unknown host

Program received signal SIGSEGV, Segmentation fault.
0x804863a in get_my_ip () at test.c:36
36 return( inet_ntoa( *(struct in_addr *)hp->h_addr ) );
(gdb) where
#0 0x804863a in get_my_ip () at test.c:36
#1 0x8048662 in main (argc=1, argv=0xbffff904) at test.c:40
#2 0x40042bcc in __libc_start_main () from /lib/libc.so.6
(gdb)

This shows you that at line 36 in test.c something went wrong. This happened in get_my_ip which was called from main() on line 40, and so on.... If you're doing hardcore stuff and have the debug versions of libc you'll get information on where stuff broke in there too. You'll see that the only line numbers displayed were in the test program, as that's the only thing compiled with -ggdb.

Electric Fence

Electric fence (ftp://ftp.perens.com/pub/ElectricFence) is a malloc() debugging library written by Bruce Perens. It rocks. Basically how it works is this... When you allocate memory it allocates protected memory just after. If you have a fencepost error (running off the end of an array) your program will immediately exit with a protection error. Combining this with gdb you can track down most memory errors (YMMV of course :)

There are two options you can use with Electric Fence, both controlled by environment variables:

EF_PROTECT_BELOW=1
EF_PROTECT_FREE=1

If EF_PROTECT_BELOW is set to 1 then Electric Fence will allocate protected memory below your malloc()'d memory as well as above. This will help protect you from running off the other side of your arrays. EF_PROTECT_FREE, when set to 1, marks memory that you free() as protected, thus not allowing you to use memory that has been free()d. This helped me fix a potentially nasty bug in something like the following code:

SafeStrncpy( tokenbuffer, strstr( buffer, "=" )+1, MAXBUF );
tokenbuffer[MAXBUF - 1] = '\0';
if( tokenbuffer ) {
    free( tokenbuffer );
}
ret = atoi( tokenbuffer );  /* BUG: tokenbuffer was just free()d */

Electric Fence immediately died when it hit the last line of code, as atoi() was trying to use free()d memory.

Combining Electric Fence with gdb, you can track down to exactly what line tried to access the protected memory.

Memwatch

This is another excellent utility for detecting memory leaks. Like Electric Fence, it replaces malloc() and free() (and calloc(), realloc(), etc.) with its own functions that keep track of memory. Just compile memwatch.c into your program (shown below) and when run it'll produce a report of leaked memory.

Note: this report will not work if the program crashes. It must exit cleanly to work (see section on signal handling for how to deal with this and servers/daemons).

You can retrieve memwatch from http://www.linkdata.se/sourcecode.html. It is very simple to use. Just compile the .c file in with your program with a couple of variables set.

Example Makefile:


CC=gcc
CFLAGS=-ggdb -Wall -DMEMWATCH -DMEMWATCH_STDIO

test: test.o memwatch.o
	$(CC) -o $@ test.o memwatch.o

test.o: test.c
	$(CC) $(CFLAGS) -c test.c

memwatch.o: memwatch.c
	$(CC) $(CFLAGS) -c memwatch.c

The above will compile memwatch in with your program, with the MEMWATCH and MEMWATCH_STDIO flags set (which, as far as I can tell, make it work). Full instructions are in the USING file in the distribution.

After you have run your program you'll find something like this in the file memwatch.log:


============ MEMWATCH 2.64 Copyright (C) 1992-1999 Johan Lindh =============

Started at Tue Oct 31 23:17:30 2000

Modes: __STDC__ 32-bit mwDWORD==(unsigned long)
mwROUNDALLOC==4 sizeof(mwData)==32 mwDataSize==32

NULL free: <290> test.c(1838), NULL pointer free'd

Stopped at Tue Oct 31 23:18:30 2000

unfreed: <31> test.c(1440), 8 bytes at 0x401a0ff4 {53 F5 CC 01 00 00 00 [deleted] }
unfreed: <4> test.c(1359), 132 bytes at 0x4019af78 {70 69 6E 67 73 00 00 [deleted] }

Memory usage statistics (global):
N)umber of allocations made: 147
L)argest memory usage : 8332
T)otal of all alloc() calls: 131968
U)nfreed bytes totals : 140

The contents are pretty self-explanatory. If you free an already-free'd or NULL pointer, it points it out. Ditto for unfreed memory. The section at the bottom shows you stats: how much memory was leaked, how much was used, total allocated, and so on.

Wednesday, January 23, 2008

Early developments

The initial development of C occurred at AT&T Bell Labs between 1969 and 1973; according to Ritchie, the most creative period occurred in 1972. It was named "C" because many of its features were derived from an earlier language called "B", which according to Ken Thompson was a stripped down version of the BCPL programming language.

The origin of C is closely tied to the development of the Unix operating system, originally implemented in assembly language on a PDP-7 by Ritchie and Thompson, incorporating several ideas from colleagues. Eventually they decided to port the operating system to a PDP-11. B's lack of functionality to take advantage of some of the PDP-11's features, notably byte addressability, led to the development of an early version of the C programming language.

The original PDP-11 version of the Unix system was developed in assembly language. By 1973, with the addition of struct types, the C language had become powerful enough that most of the Unix kernel was rewritten in C. This was one of the first operating system kernels implemented in a language other than assembly. (Earlier instances include the Multics system (written in PL/I), and MCP (Master Control Program) for the Burroughs B5000 written in ALGOL in 1961.)

K&R C

In 1978, Brian Kernighan and Dennis Ritchie published the first edition of The C Programming Language. This book, known to C programmers as "K&R", served for many years as an informal specification of the language. The version of C that it describes is commonly referred to as "K&R C". The second edition of the book covers the later ANSI C standard.

K&R introduced several language features:

  • standard I/O library
  • long int data type
  • unsigned int data type
  • compound assignment operators =op were changed to op= to remove the semantic ambiguity created by the construct i=-10, which had been interpreted as i =- 10 instead of the possibly intended i = -10

Even after the publication of the 1989 C standard, for many years K&R C was still considered the "lowest common denominator" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well.

In early versions of C, only functions that returned a non-integer value needed to be declared if used before the function definition; a function used without any previous declaration was assumed to return an integer, if its value was used.

For example:

long int SomeFunction();
/* int OtherFunction(); */

/* int */ CallingFunction()
{
    long int test1;
    register /* int */ test2;

    test1 = SomeFunction();
    if (test1 > 0)
        test2 = 0;
    else
        test2 = OtherFunction();

    return test2;
}

All the above commented-out int declarations could be omitted in K&R C.

Since K&R function declarations did not include any information about function arguments, function parameter type checks were not performed, although some compilers would issue a warning message if a local function was called with the wrong number of arguments, or if multiple calls to an external function used different numbers or types of arguments. Separate tools such as Unix's lint utility were developed that (among other things) could check for consistency of function use across multiple source files.

In the years following the publication of K&R C, several unofficial features were added to the language, supported by compilers from AT&T and some other vendors. The large number of extensions and the lack of agreement on a standard library, together with the language's popularity and the fact that not even the Unix compilers precisely implemented the K&R specification, led to the necessity of standardization.

ANSI C and ISO C

During the late 1970s and 1980s, versions of C were implemented for a wide variety of mainframe computers, minicomputers, and microcomputers, including the IBM PC, as its popularity began to increase significantly.

In 1983, the American National Standards Institute (ANSI) formed a committee, X3J11, to establish a standard specification of C. In 1989, the standard was ratified as ANSI X3.159-1989 "Programming Language C." This version of the language is often referred to as ANSI C, Standard C, or sometimes C89.

In 1990, the ANSI C standard (with a few minor modifications) was adopted by the International Organization for Standardization (ISO) as ISO/IEC 9899:1990. This version is sometimes called C90. Therefore, the terms "C89" and "C90" refer to essentially the same language.

One of the aims of the C standardization process was to produce a superset of K&R C, incorporating many of the unofficial features subsequently introduced. However, the standards committee also included several new features, such as function prototypes (borrowed from C++), void pointers, support for international character sets and locales, and preprocessor enhancements. The syntax for parameter declarations was also augmented to include the C++ style:

int main(int argc, char **argv)
{
    ...
}

although the K&R interface

int main(argc, argv)
int argc;
char **argv;
{
    ...
}

continued to be permitted, for compatibility with existing source code.

C89 is supported by current C compilers, and most C code being written nowadays is based on it. Any program written only in Standard C and without any hardware-dependent assumptions will run correctly on any platform with a conforming C implementation, within its resource limits. Without such precautions, programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to a reliance on compiler- or platform-specific attributes such as the exact size of data types and byte endianness.

In cases where code must be compilable by either standard-conforming or K&R C-based compilers, the __STDC__ macro can be used to split the code into Standard and K&R sections to take advantage of features available only in Standard C.

#ifdef __STDC__
extern int getopt(int,char * const *,const char *);
#else
extern int getopt();
#endif

In the above example, a compiler which has defined the __STDC__ macro (as mandated by the C standard) only interprets the line following the ifdef command. In other, nonstandard compilers which don't define the macro, only the line following the else command is interpreted.

C99

Note: C99 is also the name of a C compiler for the Texas Instruments TI-99/4A home computer. Aside from being a C compiler, it is otherwise unrelated.

After the ANSI standardization process, the C language specification remained relatively static for some time, whereas C++ continued to evolve, largely during its own standardization effort. Normative Amendment 1 created a new standard for the C language in 1995, but only to correct some details of the C89 standard and to add more extensive support for international character sets. However, the standard underwent further revision in the late 1990s, leading to the publication of ISO 9899:1999 in 1999. This standard is commonly referred to as "C99." It was adopted as an ANSI standard in May 2000.

New features

C99 introduced several new features, many of which had already been implemented as extensions in several compilers, including:

  • inline functions
  • variable-length arrays
  • new integer types, including long long int, and a boolean type
  • one-line comments beginning with //, as in BCPL and C++
  • intermingled declarations and code
  • new library headers, such as stdint.h

Upward-compatibility with C90

C99 is for the most part upward-compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer has int implicitly assumed. The C standards committee decided that it was of more value for compilers to diagnose inadvertent omission of the type specifier than to silently process legacy code that relied on implicit int. In practice, compilers are likely to diagnose the omission but also assume int and continue translating the program.

Support by major compilers

GCC and other C compilers now support many of the new features of C99. However, there has been less support from vendors such as Microsoft and Borland that have mainly focused on C++, since C++ provides similar functionality improvement.

GCC, despite its extensive C99 support, is still not a completely compliant implementation; several key features are missing or don't work correctly.[6]

According to Sun Microsystems, Sun Studio Compiler Suite (which is freely downloadable) now supports the full C99 standard.[7]

Version detection

A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. As with the __STDC__ macro for C90, __STDC_VERSION__ can be used to write code that will compile differently for C90 and C99 compilers, as in this example that ensures that inline is available in either case.

#if __STDC_VERSION__ >= 199901L
/* "inline" is a keyword */
#else
# define inline /* nothing */
#endif

Uses

C's primary use is for "system programming", including implementing operating systems and embedded system applications, due to a combination of desirable characteristics such as code portability and efficiency, ability to access specific hardware addresses, ability to "pun" types to match externally imposed data access requirements, and low runtime demand on system resources.

C has also been widely used to implement end-user applications, although as applications became larger much of that development shifted to other, higher-level languages.

One consequence of C's wide acceptance and efficiency is that the compilers, libraries, and interpreters of other higher-level languages are often implemented in C.

C is used as an intermediate language by some implementations of higher-level languages, which translate the input language to C source code, perhaps along with other object representations. The C source code is compiled by a C compiler to produce object code. This approach may be used to gain portability (C compilers exist for nearly all platforms) or for convenience (it avoids having to develop machine-specific code generators). Some programming languages which use C this way are Eiffel, Esterel, Gambit, the Glasgow Haskell Compiler, Lisp dialects, Lush, Sather, Squeak, and Vala.

Unfortunately, C was designed as a programming language, not as a compiler target language, and is thus less than ideal for use as an intermediate language. This has led to development of C-based intermediate languages such as C--.

Syntax

Main article: C syntax

Unlike languages such as FORTRAN 77, C source code is free-form which allows arbitrary use of whitespace to format code, rather than column-based or text-line-based restrictions. Comments may appear either between the delimiters /* and */, or (in C99) following // until the end of the line.

Each source file contains declarations and function definitions. Function definitions, in turn, contain declarations and statements. Declarations either define new types using keywords such as struct, union, and enum, or assign types to and perhaps reserve storage for new variables, usually by writing the type followed by the variable name. Keywords such as char and int specify built-in types. Sections of code are enclosed in braces ({ and }) to limit the scope of declarations and to act as a single statement for control structures.

As an imperative language, C uses statements to specify actions. The most common statement is an expression statement, consisting of an expression to be evaluated, followed by a semicolon; as a side effect of the evaluation, functions may be called and variables may be assigned new values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords. Structured programming is supported by if(-else) conditional execution and by do-while, while, and for iterative execution (looping). The for statement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted. break and continue can be used to leave the innermost enclosing loop statement or skip to its reinitialization. There is also a non-structured goto statement which branches directly to the designated label within the function. switch selects a case to be executed based on the value of an integer expression.

Expressions can use a variety of built-in operators (see below) and may contain function calls. The order in which operands to most operators, as well as the arguments to functions, are evaluated is unspecified; the evaluations may even be interleaved. However, all side effects (including storage to variables) will occur before the next "sequence point"; sequence points include the end of each expression statement and the entry to and return from each function call. This permits a high degree of object code optimization by the compiler, but requires C programmers to exert more care to obtain reliable results than is needed for other programming languages.

C Operators

C supports a rich set of operators, which are symbols used within an expression to specify the manipulations to be performed while evaluating that expression. C has operators for:

  • arithmetic
  • equality testing
  • order relations
  • boolean logic
  • bitwise logic
  • assignment
  • increment and decrement
  • reference and dereference
  • conditional evaluation
  • member selection
  • type conversion
  • object size
  • function argument collection
  • sequencing
  • subexpression grouping

Operator precedence and associativity

What follows is the list of C operators sorted from highest to lowest priority (precedence). Operators of same priority are presented on the same line. (That the post-increment operator (++) has higher priority than the dereference operator (*) means that an expression *p++ is grouped as *(p++) and not (*p)++. That the subtraction operator (-) has left-to-right associativity means that an expression a-b-c is grouped as (a-b)-c and not a-(b-c).)

Class Associativity Operators
Grouping Nesting (expr)
Postfix Left-to-Right (args) [] -> . expr++ expr--
Unary Right-to-Left ! ~ + - * & (typecast) sizeof ++expr --expr
Multiplicative Left-to-Right * / %
Additive Left-to-Right + -
Shift Left-to-Right << >>
Relational Left-to-Right < <= > >=
Equality Left-to-Right == !=
Bitwise AND Left-to-Right &
Bitwise XOR Left-to-Right ^
Bitwise OR Left-to-Right |
Logical AND Left-to-Right &&
Logical OR Left-to-Right ||
Conditional Right-to-Left ?:
Assignment Right-to-Left = += -= *= /= &= |= ^= <<= >>=
Sequence Left-to-Right ,

[edit] "Hello, world" example

The following simple application appeared in the first edition of K&R, and has become the model for an introductory program in most programming textbooks, regardless of programming language. The program prints out "hello, world" to the standard output, which is usually a terminal or screen display. Standard output might also be a file or some other hardware device, depending on how standard output is mapped at the time the program is executed.

main()
{
    printf("hello, world\n");
}

The above program will compile on most modern compilers that are not in compliance mode, but does not meet the requirements of either C89 or C99. Compiling this program in C99 compliance mode will result in warning or error messages.[8] A compliant version of the above program follows:

#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
    return 0;
}

What follows is a line-by-line analysis of the above program:

#include <stdio.h>

This first line of the program is a preprocessing directive, #include. This causes the preprocessor — the first tool to examine source code as it is compiled — to substitute the line with the entire text of the stdio.h file. The header file stdio.h contains declarations for standard input and output functions such as printf. The angle brackets surrounding stdio.h indicate that stdio.h can be found using an implementation-defined search strategy. Double quotes may also be used for headers, thus allowing the implementation to supply (up to) two strategies. Typically, angle brackets are reserved for headers supplied by the C compiler, and double quotes for local or installation-specific headers.

int main(void)

This next line indicates that a function named main is being defined. The main function serves a special purpose in C programs: The run-time environment calls the main function to begin program execution. The type specifier int indicates that the return value, the value of evaluating the main function that is returned to its invoker (in this case the run-time environment), is an integer. The keyword void as a parameter list indicates that the main function takes no arguments.[9]

{

This opening curly brace indicates the beginning of the definition of the main function.

    printf("hello, world\n");

This line calls (executes the code for) a function named printf, which is declared in the included header stdio.h and supplied from a system library. In this call, the printf function is passed (provided with) a single argument, the address of the first character in the string literal "hello, world\n". The string literal is an unnamed array with elements of type char, set up automatically by the compiler with a final 0-valued character to mark the end of the array (printf needs to know this). The \n is an escape sequence that C translates to the newline character, which on output signifies the end of the current line. The return value of the printf function is of type int, but it is silently discarded since it is not used by the caller. (A more careful program might test the return value to determine whether or not the printf function succeeded.) The semicolon ; terminates the statement.

    return 0;

This line terminates the execution of the main function and causes it to return the integer value 0, which is interpreted by the run-time system as an exit code (indicating successful execution).

}

This closing curly brace indicates the end of the code for the main function.

Data structures

C has a static, weak type system that shares some similarities with those of other ALGOL descendants such as Pascal. There are built-in types for integers of various sizes, both signed and unsigned, floating-point numbers, characters, and enumerated types (enum). C99 added a boolean data type. There are also derived types including arrays, pointers, records (struct), and untagged unions (union).

C is often used in low-level systems programming where escapes from the type system may be necessary. The compiler attempts to ensure type correctness of most expressions, but the programmer can override the checks in various ways, either by using a type cast to explicitly convert a value from one type to another, or by using pointers or unions to reinterpret the underlying bits of a value in some other way.

Pointers

C supports the use of pointers, a very simple type of reference that records, in effect, the address or location of an object or function in memory. Pointers can be dereferenced to access data stored at the address pointed to, or to invoke a pointed-to function. Pointers can be manipulated using assignment and also pointer arithmetic. The run-time representation of a pointer value is typically a raw memory address (perhaps augmented by an offset-within-word field), but since a pointer's type includes the type of the thing pointed to, expressions including pointers can be type-checked at compile time. Pointer arithmetic is automatically scaled by the size of the pointed-to data type. (See Array↔pointer interchangeability below.) Pointers are used for many different purposes in C. Text strings are commonly manipulated using pointers into arrays of characters. Dynamic memory allocation, which is described below, is performed using pointers. Many data types, such as trees, are commonly implemented as dynamically allocated struct objects linked together using pointers. Pointers to functions are useful for callbacks from event handlers.

A null pointer is a pointer value that points to no valid location (it is often represented by address zero). Dereferencing a null pointer is therefore meaningless, typically resulting in a run-time error. Null pointers are useful for indicating special cases such as no next pointer in the final node of a linked list, or as an error indication from functions returning pointers.

Void pointers (void *) point to objects of unknown type, and can therefore be used as "generic" data pointers. Since the size and type of the pointed-to object is not known, void pointers cannot be dereferenced, nor is pointer arithmetic on them allowed, although they can easily be (and in many contexts implicitly are) converted to and from any other object pointer type.

Arrays

Array types in C are always one-dimensional and, traditionally, of a fixed, static size specified at compile time. (The more recent C99 standard also allows a form of variable-length arrays.) However, it is also possible to allocate a block of memory (of arbitrary size) at run-time, using the standard library's malloc function, and treat it as an array. C's unification of arrays and pointers (see below) means that true arrays and these dynamically-allocated, simulated arrays are virtually interchangeable. Since arrays are always accessed (in effect) via pointers, array accesses are typically not checked against the underlying array size, although the compiler may provide bounds checking as an option. Array bounds violations are therefore possible and rather common in carelessly written code (see the "Criticism" article), and can lead to various repercussions: illegal memory accesses, corruption of data, buffer overrun, run-time exceptions, etc.

C does not have a special provision for declaring multidimensional arrays, but rather relies on recursion within the type system to declare arrays of arrays, which effectively accomplishes the same thing. The index values of the resulting "multidimensional array" can be thought of as increasing in row-major order.

Array↔pointer interchangeability

A distinctive (but potentially confusing) feature of C is its treatment of arrays and pointers. The array-subscript notation x[i] can also be used when x is a pointer; the interpretation (using pointer arithmetic) is to access the (i+1)th of several adjacent data objects pointed to by x, counting the object that x points to (which is x[0]) as the first element of the array.

Formally, x[i] is equivalent to *(x + i). Since the type of the pointer involved is known to the compiler at compile time, the address that x + i points to is not the address pointed to by x incremented by i bytes, but rather incremented by i multiplied by the size of an element that x points to. The size of these elements can be determined with the operator sizeof by applying it to any dereferenced element of x, as in n = sizeof *x or n = sizeof x[0].

Furthermore, in most expression contexts (a notable exception is sizeof array), the name of an array is automatically converted to a pointer to the array's first element; this implies that an array is never copied as a whole when named as an argument to a function, but rather only the address of its first element is passed. Therefore, although C's function calls use pass-by-value semantics, arrays are in effect passed by reference.

The number of elements in a declared array a can be determined as sizeof a / sizeof a[0].

An interesting demonstration of the interchangeability of pointers and arrays is shown below. The four assignments are equivalent and each is valid C code. Note how the last line contains the strange code i[x] = 1;, which has the index variable i apparently interchanged with the array variable x. This last line might be found in obfuscated C code.

/* x designates an array */
x[i] = 1;
*(x + i) = 1;
*(i + x) = 1;
i[x] = 1; /* strange, but correct: i[x] is equivalent to *(i + x) */

However, there is a distinction to be made between arrays and pointer variables. Even though the name of an array is in most expression contexts converted to a pointer (to its first element), this pointer does not itself occupy any storage. Consequently, you cannot change what an array "points to", and it is impossible to assign to an array. (Arrays may however be copied using the memcpy function, for example.)

Memory management

One of the most important functions of a programming language is to provide facilities for managing memory and the objects that are stored in memory. C provides three distinct ways to allocate memory for objects:

  • Static memory allocation: space for the object is provided in the binary at compile-time; these objects have an extent (or lifetime) as long as the binary which contains them is loaded into memory
  • Automatic memory allocation: temporary objects can be stored on the stack, and this space is automatically freed and reusable after the block in which they are declared is exited
  • Dynamic memory allocation: blocks of memory of arbitrary size can be requested at run-time using library functions such as malloc from a region of memory called the heap; these blocks persist until subsequently freed for reuse by calling the library function free

These three approaches are appropriate in different situations and have various tradeoffs. For example, static memory allocation has no allocation overhead, automatic allocation may involve a small amount of overhead, and dynamic memory allocation can potentially have a great deal of overhead for both allocation and deallocation. On the other hand, stack space is typically much more limited and transient than either static memory or heap space, and dynamic memory allocation allows allocation of objects whose size is known only at run-time. Most C programs make extensive use of all three.

Where possible, automatic or static allocation is usually preferred because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. Unfortunately, many data structures can grow in size at runtime, and since static allocations (and automatic allocations in C89 and C90) must have a fixed size at compile-time, there are many situations in which dynamic allocation must be used. Prior to the C99 standard, variable-sized arrays were a common example of this (see "malloc" for an example of dynamically allocated arrays).

Libraries

The C programming language uses libraries as its primary method of extension. In C, a library is a set of functions contained within a single "archive" file. Each library typically has a header file, which contains the prototypes of the functions contained within the library that may be used by a program, and declarations of special data types and macro symbols used with these functions. In order for a program to use a library, it must include the library's header file, and the library must be linked with the program, which in many cases requires compiler flags (e.g., -lm, shorthand for "math library").

The most common C library is the C standard library, which is specified by the ISO and ANSI C standards and comes with every C implementation. ("Freestanding" [embedded] C implementations may provide only a subset of the standard library.) This library supports stream input and output, memory allocation, mathematics, character strings, and time values.

Another common set of C library functions are those used by applications specifically targeted for Unix and Unix-like systems, especially functions which provide an interface to the kernel. These functions are detailed in various standards such as POSIX and the Single UNIX Specification.

Since many programs have been written in C, there are a wide variety of other libraries available. Libraries are often written in C because C compilers generate efficient object code; programmers then create interfaces to the library so that the routines can be used from higher-level languages like Java, Perl, and Python.

Criticism

Despite its popularity, C has been widely criticized. Such criticisms fall into two broad classes: desirable operations that are too hard to achieve using unadorned C, and undesirable operations that are too easy to accidentally invoke while using C. Putting this another way, the safe, effective use of C requires more programmer skill, experience, effort, and attention to detail than is required for some other programming languages.

Tools for mitigating issues with C

Tools have been created to help C programmers avoid some of the problems inherent in the language.

Automated source code checking and auditing are beneficial in any language, and for C many such tools exist, such as Lint. A common practice is to use Lint to detect questionable code when a program is first written. Once a program passes Lint, it is then compiled using the C compiler.

There are also compilers, libraries and operating system level mechanisms for performing array bounds checking, buffer overflow detection, serialization and automatic garbage collection, that are not a standard part of C.

Cproto is a program that will read a C source file and output prototypes of all the functions within the source file. This program can be used in conjunction with the make command to create new files containing prototypes each time the source file has been changed. These prototype files can be included by the original source file (e.g., as "filename.p"), which reduces the problems of keeping function definitions and source files in agreement.

It should be recognized that these tools are not a panacea. Because of C's flexibility, some types of errors involving misuse of variadic functions, out-of-bounds array indexing, and incorrect memory management cannot be detected on some architectures without incurring a significant performance penalty. However, some common cases can be recognized and accounted for.

Related languages

When object-oriented languages became popular, C++ and Objective-C were two different extensions of C that provided object-oriented capabilities. Both languages were originally implemented as preprocessors -- source code was translated into C, and then compiled with a C compiler.

C++

Main article: C++

Bjarne Stroustrup devised the C++ programming language as one approach to providing object-oriented functionality with C-like syntax. C++ adds greater typing strength, scoping and other tools useful in object-oriented programming and permits generic programming via templates. Nearly a superset of C, C++ now supports most of C, with a few exceptions (see Compatibility of C and C++ for an exhaustive list of differences).

D

Unlike C++, which maintains nearly complete backwards compatibility with C, D makes a clean break with C while maintaining the same general syntax. It abandons a number of features of C which the designer of D considered undesirable, including the C preprocessor and trigraphs, and adds some, but not all, of the extensions of C++.

Objective-C

Main article: Objective-C

Objective-C is a very "thin" layer on top of, and is a strict superset of, C that permits object-oriented programming using a hybrid dynamic/static typing paradigm. Objective-C derives its syntax from both C and Smalltalk: syntax that involves preprocessing, expressions, function declarations and function calls is inherited from C, while the syntax for object-oriented features is taken from Smalltalk.

Other influences

C has directly or indirectly influenced many later languages such as Java, C#, Perl, PHP, JavaScript, and Unix's C Shell. The most pervasive influence has been syntactical: all of the languages mentioned combine the statement and (more or less recognizably) expression syntax of C with type systems, data models and/or large-scale program structures that differ from those of C, sometimes radically.


Posted by swapnil.... at 7:51 AM

A Journey Through Programming Language Generations

Article 2 by Mohamad Johan Bin Mohd. Nasir



Programming languages have evolved tremendously since the early 1950's, and this evolution has resulted in hundreds of different languages being invented and used in the industry. This revolution was needed: we can now instruct computers more easily and quickly than ever before, thanks to technological advancement in hardware, with fast processors like Intel's 200MHz Pentium Pro. As more powerful computers are produced, capable of handling complex code in languages of this generation like Appware and PROLOG, language designers will be prompted to design more efficient languages for various applications. This article goes down memory lane to look at the past five generations of languages and how they revolutionised the computer industry.

We start with the first and second generation languages, from the period 1950-60, which many experienced programmers will recognise as machine and assembly languages. Programming language history really began with the work of Charles Babbage in the early nineteenth century, who developed automated calculation for mathematical functions. Further developments in the early 1950's brought us machine language, without interpreters and compilers to translate languages. Micro-code, which resides in the CPU and is written for operations such as multiplication or division, is an example of a first generation language. Computers were then programmed in binary notation, which was very prone to errors, and a simple algorithm resulted in lengthy code. This was later improved with mnemonic codes to represent operations.

Symbolic assembly codes came next, in the mid 1950's: the second generation of programming languages, such as AUTOCODER, SAP and SPS. Symbolic addresses allowed programmers to represent memory locations, variables and instructions with names, so they no longer had to change addresses by hand whenever variables moved to new locations. This kind of programming is still considered fast, but programming in machine language required deep knowledge of the CPU and the machine's instruction set. It also meant high hardware dependency and a lack of portability: assembly or machine code could not run on different machines. For example, code written for the Intel processor family would look very different from code written for the Motorola 68X00 series, and converting it would mean rewriting a whole length of code.

The period from the early 1960's to 1980 saw the emergence of the third generation of programming languages. Languages like ALGOL 58, 60 and 68, COBOL, FORTRAN IV, ADA and C are examples, and were considered high level languages. Most of these languages had compilers, and the advantage of this was speed. Independence was another factor: these languages were machine independent and could run on different machines. The advantages of high level languages include support for abstraction, so that programmers can concentrate on finding the solution to the problem rapidly rather than on low-level details of data representation. Their comparative ease of use and learning, improved portability and simplified debugging, modification and maintenance led to greater reliability and lower software costs.

These languages were mostly created following von Neumann constructs, with sequential procedural operations and code executed using branches and loops. Although the syntax of these languages differed, they shared similar constructs and were more readable by programmers and users than assembly languages. Some languages were improved over time, and some were influenced by previous languages, taking the features thought to be good and discarding unwanted ones. New features were also added to make the languages more powerful.

COBOL (COmmon Business-Oriented Language), a business data processing language, is an example of a language constantly improving over the decades. It started out as a language called FLOWMATIC in 1955, which influenced the birth of COBOL-60 in 1959. Over the years improvements were made and COBOL 61, 65, 68 and 70 were developed, with the language recognised as a standard in 1961. Now the new COBOL 97 has included features like Object Oriented Programming to keep up with current languages. One good reason for this continuity is that existing code is important, and developing a totally new language from scratch would be a lengthy process. This was also the rationale behind the development of C and C++.

Then there were languages that evolved from other languages, like LISP1, developed in 1959 for artificial intelligence work, which evolved into LISP 1.5 and strongly influenced languages like MATHLAB, LPL and PL/I. A language like BALM had the combined influence of ALGOL-60 and LISP 1.5. These third generation languages are less processor dependent than lower level languages. An advantage of a language like C++ is that it gives programmers a great deal of control over how things are done in creating applications. This control, however, calls for more in-depth knowledge of how the operating system and the computer work. Many programmers still prefer these languages, despite having to devote substantial professional effort to learning a new, complicated syntax that sometimes bears little relation to human-language syntax, even when it is in English.

Third generation languages often followed a procedural model: the language performs functions defined in specific procedures that spell out how something is done. In comparison, most fourth generation languages are nonprocedural: they specify what is to be accomplished but not how, unconcerned with the detailed procedures needed to achieve the target, as in graphics packages, applications and report generators. A disadvantage of fourth generation languages was that they were slow compared to compiled languages, and they also lacked control. Powerful languages of the future may combine procedural code and nonprocedural statements with the flexibility of interactive screen applications, a powerful way of developing applications. The need for this kind of language is in line with a minimum-work-and-skill, point-and-click concept: programmers whose primary interest is programming and computing use third generation languages, while end users who use computers and programs to solve problems in other fields are the main users of fourth generation languages.

The features expected of fourth generation languages are quite clear: they must be user friendly, portable and independent of operating systems, usable by non-programmers, and equipped with intelligent default options about what the user wants, allowing the user to obtain results quickly from minimal, bug-free code generated from high-level expressions (employing database and dictionary management that makes applications easy and quick to change), which was not possible using COBOL or PL/I. Standardisation at an early stage of evolution, however, can inhibit the creativity needed to develop powerful languages for the future. Examples of this generation of languages are IBM's ADRS2, APL, CSP and AS, PowerBuilder and Access.

The 1990's saw the development of fifth generation languages like PROLOG, a term referring to systems used in the fields of artificial intelligence, fuzzy logic and neural networks. This means that computers may in the future be able to think for themselves and draw their own inferences from information programmed into large databases. Complex processes like understanding speech would appear trivial using such fast inference, and the software would seem highly intelligent; indeed, databases programmed in a specialised area of study could show expertise significantly greater than a human's. Meanwhile, improvements in fourth generation languages brought features where users did not need any programming knowledge: little or no coding, and computer-aided design with graphics, provide easy-to-use products that can generate new applications.

What does the next generation of languages hold for us? The sixth generation? That is pretty uncertain at the moment. Fast machines, like fifth generation computers with multiple processors operating in parallel to solve problems simultaneously, will probably ignite a whole new type of language design. The current trend of the Internet and the World Wide Web could cultivate a whole new breed of radical programmers, already exploring new boundaries with languages like HTML and Java. What happens next depends entirely on the future needs of the computer and communications industry. Microsoft simply asks, "Where do you want to go today?"


References

History of Programming Languages, Richard L. Wexelblat (ed.), Academic Press, 1981.

Fourth Generation Languages, Volume 1: Principles, James Martin and Joe Leben, Prentice Hall, 1986.

High Level Languages and Their Compilers, Des Watson, Addison-Wesley, 1989.

Thursday, January 3, 2008

Here, a genealogy of programming languages is shown. Languages are categorized under the ancestor language with the strongest influence. Of course, any such categorization has a large arbitrary element, since programming languages often incorporate major ideas from multiple sources.

Other lists of programming languages are:

  1. Alphabetical
  2. Categorical
  3. Chronological
  4. Generational