Friday, February 20, 2015

Stack Tutorial

1. Why is a stack implemented as a linked list?

 http://stackoverflow.com/questions/12056842/why-stack-is-implemented-as-a-linked-list-what-is-the-need-why-array-implemen 

2. error: template declaration of ‘typedef’
http://stackoverflow.com/questions/19192122/template-declaration-of-typedef-typename-footbar-bar

3. Sort the elements inside a stack using only push and pop operations. Is it possible to do it with only one additional stack?
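The answer is yes. A sketch with std::stack (the function name is my own) that leaves the largest element on top:

```cpp
#include <cassert>
#include <stack>

// Sort a stack using only push/pop/top and one auxiliary stack.
// On return the largest element is on top of the result.
std::stack<int> sort_stack(std::stack<int> input) {
    std::stack<int> sorted;
    while (!input.empty()) {
        int x = input.top();
        input.pop();
        // Move elements larger than x back onto the input stack
        // until x can sit on top of a smaller-or-equal element.
        while (!sorted.empty() && sorted.top() > x) {
            input.push(sorted.top());
            sorted.pop();
        }
        sorted.push(x);
    }
    return sorted;  // ascending from bottom to top
}
```

Each element is pushed back at most once per larger element above it, so the worst case is O(n^2) pops - the price of using no extra storage beyond the one auxiliary stack.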

4. Five boxes, A through E, are stacked in alphabetical order with box A on top. The bottom three boxes are simultaneously removed and placed on top with their vertical order maintained. If this procedure is repeated two more times, which box ends up in the middle? (Microsoft question from CareerCup)
Ans. In my analysis, the bottom box of each arrangement becomes the middle box after the next move. Listing the stacks bottom-to-top with A=1 ... E=5:
 5 4 3 2 1 - (iteration 0, 3 in the middle)
 2 1 5 4 3 - (iteration 1, 5 in the middle)
 4 3 2 1 5 - (iteration 2, 2 in the middle)
 1 5 4 3 2 - (iteration 3, 4 in the middle, i.e. box D)
 ....... (after the fifth iteration the original order returns)
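The rotation can be checked with a short simulation (a sketch; the deque-based helper and function names are my own):

```cpp
#include <cassert>
#include <deque>

// Boxes listed top-to-bottom, 'A' on top. One move takes the bottom
// three boxes and places them on top, keeping their vertical order.
std::deque<char> move_bottom_three(std::deque<char> s) {
    std::deque<char> bottom(s.end() - 3, s.end());      // bottom three, in order
    s.erase(s.end() - 3, s.end());
    s.insert(s.begin(), bottom.begin(), bottom.end());  // place them on top
    return s;
}

char middle_after(int moves) {
    std::deque<char> s = {'A', 'B', 'C', 'D', 'E'};     // A on top
    for (int i = 0; i < moves; i++)
        s = move_bottom_three(s);
    return s[2];                                        // middle of the five
}
```

middle_after(3) gives 'D' for the three moves in the question, and middle_after(5) is back to 'C', confirming the five-move cycle.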


Thursday, November 1, 2012

Memory Management in c++

http://www.hpl.hp.com/personal/Hans_Boehm/gc/myths.ps

http://www.codeproject.com/Articles/1669/A-Generational-Garbage-Collector-in-C

 http://www.codeproject.com/Articles/4160/Automatic-Garbage-Collection-in-C-using-Smart-poin

Gcov Code Coverage Tool.

Gcov Coverage:
http://codingfreak.blogspot.in/2009/02/gcov-analyzing-code-produced-with-gcc.html

Linux wiki
http://gcc.gnu.org/onlinedocs/gcc/Gcov.html

Error :undefined symbol: __gcov_merge_add (solution)
http://stackoverflow.com/questions/566472/where-are-the-gcov-symbols

Configure
http://www.codecoffee.com/tipsforlinux/articles/27.html

Lcov
http://ltp.sourceforge.net/coverage/lcov.php

Wednesday, August 1, 2012

C++ Good link

1. C/C++ tricky questions
http://www.gowrikumar.com/c/index.html

Static object

Boost Pointer

Thinking in c++ volume two
http://www.ziddu.com/download/12315819/Tic2Vtwo.pdf.html

C programming language

C++ notes
http://www.ziddu.com/download/12315879/Constructor.doc.html

Difference between Encapsulation vs abstraction
http://pragmaticcraftsman.com/2006/05/encapsulation-vs-abstraction/

Learncpp
http://www.learncpp.com/

Parashift Interview Question
http://www.parashift.com/c++-faq-lite/

counter view
http://yosefk.com/c++fqa/inline.html

explicit constructor
http://www.go4expert.com/forums/showthread.php?t=20756

Scoped_ptr related link
http://efreedom.com/Question/1-500656/CPlusPlus-Using-Scoped-Ptr-Member-Variable

http://efreedom.com/Question/1-3625981/Can-Scoped-Ptr-Shared-Ptr-Mixed

Function Pointer 
http://www.newty.de/fpt/index.html

Call Back Function
http://www.codeguru.com/cpp/cpp/cpp_mfc/callbacks/article.php/c10557/Callback-Functions-Tutorial.htm
http://www.tutok.sk/fastgl/callback.html

C++ FAQ
http://www2.research.att.com/~bs/bs_faq2.html

Call a C library from C++ code, or call a C++ library from C code?
http://developers.sun.com/solaris/articles/mixing.html
http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html

You have an empty class. What does the compiler do?
http://www.cplusplus.com/forum/general/8370/

http://forums.devx.com/archive/index.php/t-83525.html

http://stackoverflow.com/questions/4035180/advantages-of-an-empty-class-in-c

http://www.informit.com/articles/article.aspx?p=31473&seqNum=2#


How will you handle factorials of large numbers?
http://answers.google.com/answers/threadview/id/33709.html

Auto_Ptr
http://www.bogotobogo.com/cplusplus/autoptr.php

http://www.cprogramming.com/tutorial/auto_ptr.html


Design Pattern

http://www.coderanch.com/how-to/java/SingletonPattern

http://sourcemaking.com/design_patterns/facade


Pointer
http://www.ericgiguere.com/articles/reading-c-declarations.html 

http://c-pointer.blogspot.com/2009/10/understanding-pointers-in-c.html 

  
http://www.ohloh.net/

Vector inner implementation
http://www.frogatto.com/

Online Course Edx
https://www.edx.org/courses

Online Data Structure Class
https://www.coursera.org/

Binary Tree
http://www.bogotobogo.com/cplusplus/binarytree.php

Practice Program
http://www.codechef.com/problems/MONEY

C Forum
http://codereview.stackexchange.com/questions/tagged/c

Cpp Tips
http://www.cpptips.com/cpptips.html


New Operator
http://stackoverflow.com/questions/679571/when-to-use-new-and-when-not-to-in-c


Project Based URL:
http://projecteuler.net/

Good Course
http://www.stanford.edu/class/cs106b/

Open Source Project: Google Code Jam (good for beginners)
http://code.google.com/codejam

Loki
http://loki-lib.sourceforge.net/ 

Boost (last option if you can)

http://www.boost.org/development/

LearnHacker
https://learn.hackerearth.com/
 

Monday, June 4, 2012

Reference Material

C++
1. C++ FAQ
2. Test your skills in C++
3. Test your C skills
4. www.learncpp.com/
5. http://www.cppcoffe.blogspot.in/
6. http://www.geeksforgeeks.org/
7. http://www.tutorialspoint.com/cplusplus
8. Effective C++
9. http://c-pointer.blogspot.in/
10. http://www.careercup.com/



Data Structure
1. Data Structures (A Pseudocode Approach with C++)
2. Data Structures and Algorithms Made Easy
3. Cracking the Coding Interview
4. http://www.geeksforgeeks.org/
5. googlegeeks group
6. http://www.careercup.com/
7. IIT Delhi video lectures
8. Stanford data structures class
9. MIT OpenCourseWare data structures
10. Introduction to Algorithms


DataBase
1.Crack IT interview DataBase
2. http://www.programmerinterview.com/index.php/database-sql/what-is-an-index/
3. http://sql-plsql.blogspot.in/2009/04/sql-interview-questions.html

4. http://www.orafaq.com/faq/what_are_the_difference_between_ddl_dml_and_dcl_commands
5. http://www.1keydata.com/sql/sql.html 
6. http://plsql-tutorial.com/index.htm

7. http://www.sqlcourse.com/index.html (online Compiler)
8. http://www.java2s.com/Tutorial/Oracle/CatalogOracle.htm 
9. http://apex.oracle.com/i/index.html

Unix Reference:
1.Crack IT interview with Unix DataBase
2.http://www.folkstalk.com/2011/11/top-unix-interview-questions-part-1.html 
3.www.thegeekstuff.com/2010/11/50-linux-commands/
4.http://faq.programmerworld.net/networking/unix-interview-questions-answers.html

5.linux.101hacks.com/linux-commands/cut-command-examples/
6.http://www.gointerviews.com/top-50-linux-interview-questions/ 
7.http://www.theunixschool.com/2012/04/linux-interview-questions-part-1.html  


Design Pattern:
1. Head First Design Patterns (book)
2.http://sourcemaking.com/design_patterns
3. http://www.javabeat.net/2009/02/design-patterns-interview-questions/
4. http://javarevisited.blogspot.in/2012/06/20-design-pattern-and-software-design.html

System Programming:
1. Galvin - Operating System Concepts (7th Edition)
2. Programming with POSIX Threads

Verbal & Reasoning Reference :
1. Wren & Martin English Grammar Book
2. Word Power Made Easy
3.  Barron English Book


Saturday, May 19, 2012

Tree_Program

1.Create_tree

 #include<iostream>  
 #define size 3  
 using namespace std;  
 class node  
 {  
  public:       
  int data;  
  node *left;  
  node *right;  
 };  
 class tree  
 {  
  public:  
  node *root;  
  tree();  
  void create_tree(int data);  
  void inorder_traverse(node *s);  
  void print_data();  
 };  
 tree::tree()  
 {  
      root=NULL;  
 }  
 void tree::create_tree(int data)  
 {  
   node *temp = new node;  
   temp->data = data;  
   temp->left =NULL;  
   temp->right =NULL;  
   if(root ==NULL)  
        root =temp;  
   else  
   {  
       node *p = root;  
       node *trail = NULL;  
       while(p != NULL)  
       {   
         trail = p;          /* remember the parent; do not overwrite root */  
         if(p->data > data)  
         {  
             p = p->left;  
         }  
         else  
         {  
             p = p->right;  
         }  
       }  
       if(trail->data > data)  
       {  
         trail->left = temp;  
       }  
       else  
       {        
         trail->right = temp;  
       }  
   }  
 }  
 void tree::inorder_traverse(node *s)  
 {   
    if(s)       
    {  
         inorder_traverse(s->left);  
         cout<<s->data<<"  ";  
         inorder_traverse(s->right);  
    }  
 }  
 void tree::print_data()  
 {  
      inorder_traverse(root);  
 }  
 int main()  
 {  
  int data;  
  tree t;  
  for(int i=0;i<3;i++)  
  {  
      cout<<"Enter data";  
       cin>>data;  
       t.create_tree(data);  
  }  
  cout<<"inorder: traverse :"<<endl;  
  t.print_data();  
  return 0;  
 }  
Create_tree with level_order_traversal
 #include<iostream>  
 using namespace std;  
 class node  
 {   
   public:       
   int data;  
   node *left;  
   node *right;  
 };  
 class tree  
 {  
   public:  
   node *root;  
   tree();  
   void create_tree(int data);  
   void print_tree();  
   void tree_level_order_traverse(node *root);  
 };  
 tree::tree()  
 {  
   root =NULL;  
 }  
 void tree::create_tree(int data)  
 {  
   node *temp = new node;  
   temp->data = data;  
   temp->left = NULL;  
   temp->right =NULL;  
   if(root==NULL)  
     root=temp;  
   else  
   {  
    node *p=root;  
    node *trail =NULL;  
    while(p!=NULL)  
    {  
      trail=p;  
       if(p->data > data)  
         p=p->left;  
      else  
         p=p->right;  
    }  
    if(trail->data > data)  
       trail->left = temp;  
    else  
       trail->right = temp;  
   }  
 }  
 void tree::tree_level_order_traverse(node *root)  
 {  
   node *data_store[10] = { (node*)0} ;  
   int size=0;  
   int follow_variable=0;  
   while(root)    
   {  
    cout<<root->data<<" :";  
    if(root->left)  
      data_store[size++]=root->left;  
    if(root->right)  
    data_store[size++]=root->right;  
     root=data_store[follow_variable++];  
   }  
 }  
 void tree::print_tree()  
 {  
      tree_level_order_traverse(root);  
 }  
 int main()  
 {  
   tree t;       
   for(int i=0;i<8;i++)  
       t.create_tree(i);  
   cout<<"create successful"<<endl;  
   t.print_tree();   
   return 0;  
 }  
Tree_header_file
 #include<iostream>  
 using namespace std;  
 class node  
 {  
   public:  
   int data;  
   node *left;  
   node *right;  
  };  
 class tree  
 {  
  public:  
  node *root;  
  tree();  
  void create_tree(int data);  
 };  
 tree::tree()  
 {  
  root =NULL;  
 }  
 void tree::create_tree(int data)  
 {  
      node *temp= new node;  
      temp->data = data;  
      temp->left =NULL;  
      temp->right =NULL;  
       if(root == NULL)  
       {  
            root = temp;  
            return;  
       }  
       node *p = root;  
       node *trail = NULL;  
       while(p != NULL)  
       {  
         trail = p;         /* track the parent; do not overwrite root */  
         if(p->data > data)  
           p = p->left;  
         else  
           p = p->right;  
       }  
       if(trail->data > data)  
            trail->left = temp;  
       else  
            trail->right = temp;  
 }  

Find_Max_Element_without_Recursion

 template<class T>  
 int tree<T>::find_max_element(node *root)  
 {  
  node *queue[10]= {(node*)0};  
  node *p=root;  
  int data;  
  int size=0;  
  int follow_variable =0;  
  int max=0;  
  while(p!=NULL)  
  {  
   if(max < p->data)  
       max = p->data;         
   if(p->left)  
        queue[size++] = p->left;  
   if(p->right)  
        queue[size++] = p->right;  
    p = queue[follow_variable++];  
  }    
    cout<<"the max data :"<<max<<endl;  
  return max;  
 }  
Find_Max_Element_with_Recursion

 template<class T>  
 int tree<T>::find_max_element(node *root)  
 {  
  int root_value;  
  int left;  
  int right;  
  int max =0;  
  if(root!=NULL)  
  {  
    root_value = root->data;  
    left = find_max_element(root->left);  
    right = find_max_element(root->right);  
    if(right > left)  
         max=right;  
    else  
         max=left;  
  if(root_value > max)  
       max=root_value;  
  }  
  return max;  
 }  
Reverse_level_order_traverse
 void tree::reverse_level_order(node *root)  
 {  
   node *temp_storage[10]= {(node *)0};  
   int size =0;  
   int flow_variable =0;  
   while(root)  
   {  
    cout<<" "<<root->data;  
    if(root->right)  
    {  
          temp_storage[size++]=root->right;  
    }  
    if(root->left)  
    {  
         temp_storage[size++] = root->left;  
    }   
    root=temp_storage[flow_variable++];  
  }  
 }  

Thursday, April 14, 2011

C++ Program list 2

1. Reverse string program
2. Find the day of the week for a given date in a particular year
3. Print Pascal's triangle
4. Remove duplicate characters from a string
5. Find a particular substring and replace it with a replacement string
  
reverse string program

/*
 * This file contains the reverse string program.
 */

#include<iostream>
using namespace std;

template<class T>
class str
{
public:
char *p;
str();
void reverse();
void show();
};

template<class T>
str<T>::str()
{
p=new char[20];
cout<<" enter the string"<<endl;
cin>>p;
}

template<class T>
void str<T>::reverse()
{
char temp;
int count=0;
while(p[count]!='\0')
count++;
for(int i=0;i<count/2;i++)
{
temp =p[count-i-1];
p[count-i-1]=p[i];
p[i]=temp;
}
}
template<class T>
void str<T>::show()
{
cout<<p;
}
int main()
{
str<char>s;
s.reverse();
s.show();
return 0;
}

Find the day of the week for a given date in a particular year
#include<iostream>

#include<string>
#define BASEYEAR 100

using namespace std;
template<class T>
class leap
{
public:
leap();
int leap_year(int year);
long today;
void get_day();
};
template<class T>
leap<T>::leap()
{
today =0;
}
template<class T>
int leap<T>::leap_year(int year)
{
if((year%4==0 && year%100!=0) ||
(year%400==0))
return 1;

else
return 0;
}

template<class T>
void leap<T>::get_day()
{
int date;
cout<<" Enter date"<<endl;
cin>>date;
int month;
cout<<" Enter month"<<endl;
cin>>month;
int year;
cout<<"Enter year"<<endl;
cin>>year;
string day[]={"sunday","monday","tuesday","wednesday","thursday","friday","saturday"};
int day_of_month[]= {31,28,31,30,31,30,31,31,30,31,30,31}; /* December has 31 days */

for(int i=BASEYEAR;i<year-1;i++)
{
if(leap_year(i))
today=today+366;
else
today=today+365;
}

if(leap_year(year))
day_of_month[1]=29;

for(int i=0;i<month-1;i++)
today =today+day_of_month[i];

today+=date;
cout<<"today--->"<<today<<endl;
int wday =today%7;

cout<<" The day is ---->"<<day[wday];
}

Print Pascal Triangle

/*
 * Print Pascal Triangle
 *
 * 1
 * 1 1
 * 1 2 1
 * 1 3 3 1
 *
 * */

#include<iostream>
using namespace std;
int main()
{
int a[20];
a[0]=1;
int k;
for(int i=0;i<10;i++)
{
int y=0;
for(int j=10;j>=i;j--)
cout<<" ";

for(k=0;k<=i;k++)
{
int p=y+a[k];
y=a[k];
a[k]=p;
cout<<p<<" ";
}

a[k]=0;
cout<<endl;
}
return 0;
}

Remove duplicate characters from a string
 
#include<iostream>

using namespace std;

char* word(char *p)
{
/* Remove duplicate characters: for each character, shift out
 * every later occurrence of it. */
for(int i=0;p[i]!='\0';i++)
{
int k=i+1;
for(int j=i+1;p[j]!='\0';j++)
{
if(p[j]!=p[i])
p[k++]=p[j];
}
p[k]='\0';
}
return p;
}
int main()
{
char *w=new char[20];
cout<<"enter the word"<<endl;
cin>>w;
cout<<word(w);
return 0;
}

/*

* Find a particular substring and replace it with a replacement string.
* Ex.
* find "hi", replace with "ss":
* abhishek --->ab(hi)shek----->abssshek
*
* */

#include<iostream>

using namespace std;
char *change(char *p,char *find,char *replace)
{
int find_length=0;
while(find[find_length]!='\0')
find_length++;

int replace_length=0;
while(replace[replace_length]!='\0')
replace_length++;

/* This version handles replacement strings of equal or shorter
 * length, as the original did; a longer replacement would need
 * a larger buffer. */
for(int i=0;p[i]!='\0';i++)
{
/* check whether 'find' starts at position i */
int j=0;
while(find[j]!='\0' && p[i+j]==find[j])
j++;
if(j!=find_length)
continue;

/* copy the replacement over the match */
for(int re=0;re<replace_length;re++)
p[i+re]=replace[re];

/* if the replacement is shorter, close the gap */
if(replace_length<find_length)
{
int gap=find_length-replace_length;
int k=i+replace_length;
while((p[k]=p[k+gap])!='\0')
k++;
}
i+=replace_length-1; /* continue scanning after the replacement */
}
return p;
}
int main()
{
char *p=new char[40];
cout<<"Enter the string"<<endl;
cin>>p;
char *find =new char[40];

cout<<" Enter the find string"<<endl;
cin>>find;
char *replace =new char[40];

cout<<"Enter the replace"<<endl;
cin>>replace;


cout<<" final string "<<change(p,find,replace)<<endl;
return 0;
}





 
 



/* main() driver for the day-of-the-week program above */
int main()
{
leap<int> l;
l.get_day();
return 0;
}

Sunday, October 10, 2010

Inter Process communication

These are notes collected from different internet materials, together with some of my own analysis on each topic.

The purpose of this article is to familiarize readers with the different mechanisms available for communication between two or more processes. It may also serve as a tutorial for the novice programmer. There are likely several other good tutorials on this subject.

Introduction

Inter-Process Communication, which in short is known as IPC, deals mainly with the techniques and mechanisms that facilitate communication between processes. Now, why do we need special separate mechanisms or techniques for communicating between processes? Why isn't it possible to have information shared between two processes without using such special mechanisms?
Let us start from something primitive. Imagine you have two glasses completely filled with water. One glass contains hot water and the other contains cold water. What can you do to make the temperature of water in both the glasses equal? The simplest answer will be to mix the water from both the glasses in a glass with much bigger capacity. Once water is mixed, the temperature becomes equal after some time. If one can remember, this will be framed as a problem with some numerical data in a High-School Physics examination. If we go by principles, then the phenomenon here is conduction. If we go by our topic of IPC, then we can say that since the two glasses were full, we had to use another glass with a larger capacity to mix the contents in order to balance their heat energy.
Have you ever wondered about the communication medium used in telephones? What about the blood transporting system in the human body which communicates blood to different parts of the body? What about my fingers which are typing this document? My brain is doing so many things at a time. How is it directing one of my fingers to hit one key and some other finger to hit another key? How is it synchronizing the typing work that is done by both my hands? How is it also directing me to type the letters of a word that are actually coming to my mind?
Don't worry. I am not going to give a class in Biology. But it would be good if one can imagine a few more situations where we are using inter-process communication, though not necessarily in the human body or in a computer program.
So, where are we now? We know that some medium or other is required for communication between different processes. Similarly, when it comes to computer programs, we need some mechanism or medium for communication. Primarily, processes can use the available memory to communicate with each other. But then, the memory is completely managed by the operating system. A process will be allotted some part of the available memory for execution. Then each process will have its own unique user space. In no way will the memory allotted for one process overlap with the memory allotted for another process. Imagine what would happen otherwise!
So, now the question - how do different processes with unique address space communicate with each other? The operating system's kernel, which has access to all the memory available, will act as the communication channel. Similar to our earlier example, where the glass with hot water is one process address space, the glass with cold water is another, and the glass with the larger capacity is the kernel address space, so that we pour both hot water and cold water into the glass with larger capacity.
What next? There are different IPC mechanisms which come into use based on the different requirements. In terms of our water glasses, we can determine the specifics of both pouring the water into the larger glass and how it will be used after being poured.

Basic IPC

OK, enough of glasses and water. The IPC mechanisms can be classified into the following categories as given below:
  1. pipes
  2. fifos
  3. shared memory
  4. mapped memory
  5. message queues
  6. sockets

Pipes

Pipes were evolved in the most primitive forms of the Unix operating system. They provide unidirectional flow of communication between processes within the same system. In other words, they are half-duplex, that is, data flows in only one direction. A pipe is created by invoking the pipe system call, which creates a pair of file descriptors. These descriptors point to a pipe inode and the file descriptors are returned through the filedes argument. In the file descriptor pair, filedes[0] is used for reading whereas filedes[1] is used for writing.
Let me explain a scenario where we can use the pipe system call: consider a keyboard-reader program which simply exits after any alpha-numeric character is pressed on the keyboard. We will create two processes; one of them will read characters from the keyboard, and the other will continuously check for alpha-numeric characters. Let us see how the filedes returned by pipe can be of use in this scenario: (Text version: kbdread-pipe.c.txt)
/***** KEYBOARD HIT PROGRAM *****/
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <pthread.h>
#include <ctype.h>

int filedes[2];

void *read_char()
{
 char c;
 printf("Entering routine to read character.........\n");
 while(1) {
  /* Get a character in 'c' except '\n'. */
  c = getchar();
  if(c == '\n')
   c = getchar();
  write(filedes[1], &c, 1);
  if(isalnum(c)) {
   sleep(2);
   exit(1);
  }
 }
}

void *check_hit()
{
 char c;
 printf("Entering routine to check hit.........\n");
 while(1) {
  read(filedes[0], &c, 1);
  if(isalnum(c)) {
   printf("The key hit is %c\n", c);
   exit(1);
  } else {
   printf("key hit is %c\n", c);
  }
 }
}
  
int main()
{
 int i;
 pthread_t tid1, tid2;
 pipe(filedes);
 /* Create thread for reading characters. */
 i = pthread_create(&tid1, NULL, read_char, NULL); 
 
 
/* Create thread for checking hitting of any keyboard key. */
 i = pthread_create(&tid2, NULL, check_hit, NULL);
 if(i == 0) while(1);
 return 0;
}

What is the meaning of pthread_create(&tid1, NULL, read_char, NULL)?

Just FYI:
int pthread_create(pthread_t *restrict thread,
const pthread_attr_t *restrict attr,  void *(*start_routine)(void*), void *restrict arg);



Description
The pthread_create() function shall create a new thread, with attributes specified by attr
within a process. If attr is NULL, the default attributes shall be used.
If the attributes specified by attr are modified later,the thread's attributes shall not be affected.


Upon successful completion, pthread_create() shall store the ID of the created thread in the location referenced by thread.


The thread is created executing start_routine with arg as its sole argument.

If the start_routine returns, the effect shall be as if there was an implicit call to pthread_exit()  
using the return value of start_routine as the exit status.

Note that the thread in which main() was originally invoked differs from this. When it returns from main(),
the effect shall be as if there was an implicit call to exit() using the return value of main() as the exit status.
The signal state of the new thread shall be initialized as follows:
The signal mask shall be inherited from the creating thread
The set of signals pending for the new thread shall be empty
The alternate stack shall not be inherited
The floating-point environment shall be inherited from the creating thread.

If pthread_create() fails, no new thread is created and the contents of the location referenced by thread are undefined

Save and compile the program as cc filename.c -lpthread. Run the program and check the results. Try hitting a different key every time.
The read_char function simply reads a character other than '\n' from the keyboard and writes it to filedes[1]. We have the thread check_hit, which continuously checks for the character in filedes[0]. If the character in filedes[0] is an alpha-numeric character, then the character is printed and the program terminates.
One major feature of pipe is that the data flowing through the communication medium is transient, that is, data once read from the read descriptor cannot be read again. Also, if we write data continuously into the write descriptor, then we will be able to read the data only in the order in which the data was written. One can experiment with that by doing successive writes or reads to the respective descriptors.
So, what happens when the pipe system call is invoked? A good look at the manual entry for pipe suggests that it creates a pair of file descriptors. This suggests that the kernel implements pipe within the file system. However, pipe does not actually exist as such - so when the call is made, the kernel allocates free inodes and creates a pair of file descriptors as well as the corresponding entries in the file table which the kernel uses. Hence, the kernel enables the user to use the normal file operations like read, write, etc., which the user does through the file descriptors. The kernel makes sure that one of the descriptors is for reading and another one if for writing.
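The transient, first-in-first-out behaviour described above can be seen in a few lines (a sketch; the helper name is my own):

```cpp
#include <cassert>
#include <cstring>
#include <string>
#include <unistd.h>

// Two writes into a pipe, one read out: the bytes come back in exactly
// the order they were written, and reading consumes them for good.
std::string pipe_roundtrip() {
    int fd[2];
    if (pipe(fd) != 0)
        return "";
    write(fd[1], "ab", 2);
    write(fd[1], "cd", 2);
    char buf[5] = {0};
    read(fd[0], buf, 4);   // another read here would block: the data is gone
    close(fd[0]);
    close(fd[1]);
    return std::string(buf);
}
```

pipe_roundtrip() returns "abcd": the second write's data queues strictly behind the first's, and there is no way to re-read it.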

FIFOs
FIFOs (first in, first out) are similar to the working of pipes. FIFOs also provide half-duplex flow of data just like pipes. The difference between fifos and pipes is that the former is identified in the file system with a name, while the latter is not. That is, fifos are named pipes.
Fifos are identified by an access point which is a file within the file system, whereas pipes are identified by an access point which is simply an allotted inode.

*inode
The inode (index node) is a fundamental concept in the Linux and UNIX filesystem. Each object in the filesystem is represented by an inode.
But what are the objects? Let us try to understand it in simple words.
 Each and every file under Linux (and UNIX) has following attributes:
  •  File type (executable, block special etc)
  • Permissions (read, write etc)
  • Owner
  • Group
  • File Size
  • File access, change and modification time (remember UNIX or Linux never stores file creation time, this is favorite question asked in UNIX/Linux sys admin job interview)
  • File deletion time
  • Number of links (soft/hard)
  • Access Control List (ACLs)
All of the above information is stored in the inode.
In short, the inode identifies the file and its attributes (as above). Each inode is identified by a unique inode number within the file system. The inode is also known as the index number.

inode definition 

An inode is a data structure on a traditional Unix-style file system such as UFS or ext3. An inode stores basic information about a regular file, directory, or other file system object.

How do I see file inode number?

You can use ls -i command to see inode number of file
$ ls -i /etc/passwd
Sample Output
32820 /etc/passwd

You can also use stat command to find out inode number and its attribute:
$ stat /etc/passwd
Sample Output:
 
File: `/etc/passwd'
  Size: 320347          Blocks: 640        IO Block: 4096   Regular File
Device: 6801h/26625d    Inode: 196570      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2010-10-16 10:49:12.000000000 -0700
Modify: 2010-10-16 10:43:27.000000000 -0700
Change: 2010-10-16 10:43:27.000000000 -0700 
 
Inode application
Many commands used by system administrators in UNIX / Linux operating systems give inode numbers to designate a file. Let us see the practical application of the inode number. Type the following commands:
$ cd /tmp
$ touch \"la*
$ ls -l

Now try to remove the file "la*.
You can't. To remove files whose names contain control characters, characters which cannot be typed on a keyboard, or special characters such as ?, * or ^, you have to use the inode number, e.g. find . -inum <inode-number> -exec rm -i {} \; with the number reported by ls -i.
 
Difference between find & grep

grep - searches for a pattern inside a file.

find - searches for files in a directory hierarchy.

Another major difference between fifos and pipes is that fifos last throughout the life-cycle of the system, while pipes last only during the life-cycle of the process in which they were created. To make it more clear, fifos exist beyond the life of the process. Since they are identified by the file system, they remain in the hierarchy until explicitly removed using unlink, but pipes are inherited only by related processes, that is, processes which are descendants of a single process.
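That persistence can be demonstrated directly (a sketch; the path passed in and the helper name are my own):

```cpp
#include <cassert>
#include <cerrno>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

// A fifo outlives the process that created it: the special file stays
// in the file system until it is explicitly unlink()ed.
bool fifo_lifecycle(const char *path) {
    if (mkfifo(path, 0666) != 0 && errno != EEXIST)
        return false;                          // could not create the fifo
    struct stat st;
    bool exists = (stat(path, &st) == 0) && S_ISFIFO(st.st_mode);
    unlink(path);                              // explicit removal, as the text says
    return exists && stat(path, &st) != 0;     // gone only after unlink
}
```

Run it twice without the unlink() and the second mkfifo fails with EEXIST - exactly the "fifo already exists" message the programs below print.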

Let us see how a fifo can be used to detect a keypress, just as we did with pipes. The same program where we previously used a pipe can be modified and implemented using a fifo.
(Text version: write-fifo.c.txt)
/***** PROGRAM THAT READS ANY KEY HIT OF THE KEYBOARD*****/
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <pthread.h>
#include <ctype.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>

extern int errno;

void *read_char()
{
 char c;
 int fd;
 printf("Entering routine to read character.........\n");
 while(1) {
  c = getchar();
  fd = open("fifo", O_WRONLY);
  if(c == '\n')
   c = getchar();
  write(fd, &c, 1);
  if(isalnum(c)) {
   exit(1);
  }
  close(fd);
 }
}

int main()
{
 int i;
 pthread_t tid1;
 i = mkfifo("fifo", 0666);
 if(i < 0) {
  printf("Problems creating the fifo\n");
  if(errno == EEXIST) {
   printf("fifo already exists\n");
  }
  printf("errno is set as %d\n", errno);
 }
 i = pthread_create(&tid1, NULL, read_char, NULL);
 if(i == 0) while(1);
 return 0;
}
Compile this program using cc -o write_fifo filename.c. This program reads characters (keypresses), and writes them into the special file fifo. First the program creates a fifo with read-write permissions using the function mkfifo. See the manual page for the same. If the fifo exists, then mkfifo will return the corresponding error, which is set in errno. The thread read_char continuously tries to read characters from the keyboard.
Note that the fifo is opened with the O_WRONLY (write only) flag .
Once it reads a character other than '\n', it writes the same into the write end of the fifo. The program that detects it is given below: (text version detect_hit.c.txt):
/***** KEYBOARD HIT PROGRAM *****/
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <pthread.h>
#include <ctype.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>

extern int errno;

void *check_hit()
{
 char c;
 int fd;
 int i;
 printf("Entering routine to check hit.........\n");
 while(1) {
  fd = open("fifo", O_RDONLY);
  if(fd < 0) {
   printf("Error opening in fifo\n");
   printf("errno is %d\n", errno);
   continue;
  }
  i = read(fd, &c, 1);
  if(i < 0) {
   printf("Error reading fifo\n");
   printf("errno is %d\n", errno);
  }
  if(isalnum(c)) {
   printf("The key hit is %c\n", c);
   exit(1);
  } else {
   printf("key hit is %c\n", c);
  }
 }
}
  
int main()
{
 int i;
 i = mkfifo("fifo", 0666);
 if(i < 0) {
  printf("Problems creating the fifo\n");
  if(errno == EEXIST) {
   printf("fifo already exists\n");
  }
  printf("errno is set as %d\n", errno);
 }
 pthread_t tid2;
 i = pthread_create(&tid2, NULL, check_hit, NULL);
 if(i == 0) while(1);
 return 0;
}
Here, again, it first tries to create a fifo which is created if it does not exist. We then have the thread check_hit which tries to read characters from the fifo. If the read character is alphanumeric, the program terminates; otherwise the thread continues reading characters from the fifo.
Here, the fifo is opened with the flag O_RDONLY.
Compile this program with cc -o detect_hit filename.c. Now run the two executables in separate terminals, but in the same working directory. Irrespective of the order in which you run, look for the message fifo already exists on the console. The first program (either of the two) that you run will not give any error message for creation of the fifo. The second program that you run will definitely give you the error for creation of the fifo. In the terminal where you run write_fifo, give input to standard output from your keyboard. You will get the message regarding the key hit on the keyboard on the terminal running the executable detect_hit. Analyze the working of the two programs by hitting several keys.
I have used two different programs to exhibit the usage of fifos. This could be done within a single program, with the routines called in the two programs run as threads. But I did it this way to show that, unlike pipes, fifos can be used for communication between unrelated processes.
Now try running the programs again. You will get the message that the fifo already exists even when you first run either of the two programs. This shows that fifos are persistent as long as the system lives. That is, the fifos will have to be removed manually - otherwise they will be permanently recognized by the file system. This is unlike pipes, which exist only as long as the process that created the pipe is running. Once this process dies, the kernel also removes the identifiers (file descriptors) for the pipe from the file tables.

The usage is rather simple and the main advantage is that there is no need for any synchronization mechanism for accesses to the fifo. There are certain disadvantages: fifos can only be used for communication between processes running on the same host machine. Let us explore other IPC mechanisms to see what they have in store.

Pipes vs Fifo

A pipe is a mechanism for interprocess communication; data written to the pipe by one process can be read by another process. The data is handled in a first-in, first-out (FIFO) order. 

A pipe has no name; it is created for one use, and both ends must be inherited from the single process which created the pipe.

A FIFO special file is similar to a pipe, but instead of being an anonymous, temporary connection, a FIFO has a name (or names) like any other file. Processes open the FIFO by name in order to communicate through it.

A pipe or FIFO has to be open at both ends simultaneously. If you read from a pipe or FIFO file that doesn't have any processes writing to it (perhaps because they have all closed the file, or exited), 

the read returns end-of-file. Writing to a pipe or FIFO that doesn't have a reading process is treated as an error condition; it generates a SIGPIPE signal, and fails with error code EPIPE if the signal is handled or blocked. 

The only difference between pipes and FIFOs is the manner in which they are created and opened. Once these tasks have been accomplished, I/O on pipes and FIFOs has exactly the same semantics.

Shared Memory

Shared Memory is one of the three kinds of System V IPC mechanism which enables different processes to communicate with each other as if these processes shared the virtual address space; hence, any process sharing the memory region can read or write to it. One can imagine some part of memory being set aside for use by different processes.
The System V IPC describes the use of the shared memory mechanism as consisting of four steps. Taken in order, they are:
  • Fetching an identifier for the shared memory area - shmget (shared memory get)
  • Using the identifier to get the shared memory address - shmat (shared memory attach),
  • Detaching the shared memory area after use - shmdt (shared memory detach) and
  • Finally using the address to control accesses, permissions, receive information and destroy the shared memory area - shmctl (shared memory control).
Let us examine the workings of the above system calls. Recall the keyboard hit program; we shall, once again, see another version of it, this time using the system calls associated with the shared memory mechanism.
The code given below creates a shared memory area and stores the information of any key hit on the keyboard. Let us see the code first: (text version: write-shm.c.txt)
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <errno.h>
#include <string.h>
#include <ctype.h>

extern int errno;

#define SIZE 1

char *read_key;
int shmid;

int shared_init()
{
 if((shmid = shmget(9999, SIZE, IPC_CREAT | 0666)) < 0) {
  printf("Error in shmget. errno is: %d\n", errno);
  return -1;
 }
 if((read_key = shmat(shmid, NULL, 0)) == (char *) -1) {
  printf("Error in shm attach. errno is: %d\n", errno);
  return -1;
 }
 return 0;
}

void read_char()
{
 char c;
 while(1) {
  c = getchar();
  if(c == '\n') {
   c = getchar();
  }
  strncpy(read_key, &c, SIZE);
  printf("read_key now is %c\n", *read_key);
  if(isalnum(*read_key)) {
   shmdt(read_key);
   shmctl(shmid, IPC_RMID, NULL);
   exit(1);
  }
 }
}

int main()
{
 if(shared_init() < 0) {
  printf("Problems with shared memory\n");
  exit(1);
 }
 read_char();
 return 0;
}
Here we have a shared memory variable named read_key. The program first initializes the shared memory area read_key. This is done by generating a shared memory identifier shmid using the system call shmget. In the context of the program, the first parameter for shmget is 9999, which is the key. This key is used to allocate a shared memory segment. The second parameter, SIZE (defined as a macro with the value 1), suggests that the shared memory segment will hold only one of the type of the shared memory variable, that is, only 1 character. The IPC_CREAT flag (third parameter) suggests that a new shared memory segment has to be created, with read-write permissions (IPC_CREAT logically ORed with 0666). This will return a valid shared memory segment identifier on successful allocation. The identifier will be stored in shmid. If shared memory segment allocation fails, then -1 is returned and the errno is set appropriately.
The key which is used to get a shared memory segment can be generated randomly using the built-in function ftok to get a unique key. Refer to the manual page for the usage.
Once the segment identifier is obtained, we have to attach the shared memory segment to some address. This is done with the shmat system call. This uses the segment identifier shmid as the first parameter. The second parameter is the address of the shared memory segment; when this is given as NULL (as in this program), the kernel will choose a suitable address. The third parameter is the flag specification which can be set if required or left as zero (see man page of shmdt for details). On success the shared memory segment is attached to read_key; otherwise (char *) -1 is returned along with the appropriate setting of the errno.
If either shmget or shmat fails, the process terminates. On success from both system calls, we proceed by invoking the read_char function, which reads keyboard inputs other than '\n' ("Enter" key) and copies them to read_key in the shared memory. If the keyboard input is an alphanumeric character, the program stops reading inputs from the keyboard and the process terminates.
We have another program running separately (it does not have to be in the same working directory) in the local system, which tries to read the data written in the shared memory area. The code is given below: (text version: read-shm.c.txt)
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <errno.h>
#include <string.h>
#include <ctype.h>

extern int errno;

#define SIZE 1

char *detect_key;
int shmid;

int shared_init()
{
 if((shmid = shmget(9999, SIZE, 0444)) < 0) {
  printf("Error in shmget. errno is: %d\n", errno);
  return -1;
 }
 if((detect_key = shmat(shmid, NULL, SHM_RDONLY)) == (char *) -1) {
  printf("Error in shm attach. errno is: %d\n", errno);
  return -1;
 }
 return 0;
}

void detect_hit()
{
 char c;
 c = *detect_key;
 while(1) {
  if(c != *detect_key) {
   if(isalnum(detect_key[0])) {
     printf("detect_key is %c\n", *detect_key);
    shmdt(detect_key);
    shmctl(shmid, IPC_RMID, NULL);
    exit(1);
   } else {
     printf("detect_key is %c\n", *detect_key);
   }
   c = *detect_key;
  }
 }
}

int main()
{
 if(shared_init() < 0) {
  printf("Problems with shared memory\n");
  exit(1);
 }
 detect_hit();
 return 0;
}
Here, again, we have a shared memory initialization routine, which in fact does not create a new shared memory segment, but rather tries to get access to the existing shared memory segment. Compared to the previous program, the absence of IPC_CREAT flag suggests that we do not have to create a new shared memory segment. Instead, we simply have to get the corresponding segment identifier which can be used to attach the existing shared memory segment to some address. The mode 0444 restricts access to the shared memory segment to 'read only'. If no shared memory segment with key 9999 exists, we will get an error, which will be returned in errno. Once we get a valid identifier, we attach the shared memory segment to an address. While attaching, we use the flag SHM_RDONLY which specifies that the shared memory segment will be available only for reading.
Next, we have the function detect_hit, which checks whether the pressed key was an alphanumeric character. The first program obviously has to run first; otherwise, the second program will show errors during the shared memory initialization, since it would be trying to get the identifier for a non-existent shared memory segment.
The example shown here doesn't require any synchronization of access to the shared memory segment. That is because only one program writes into the shared memory and only one program reads from the shared memory area. But again, there is a problem here. What if the detection program (second one) is started long after some user has started hitting the keys (running the first program)? We will not be able to track the previously hit keys. The solution for this is left as an exercise to the readers.
The entry in /proc/sysvipc/shm gives a list of shared memory in use. Readers can compare the entries before, during and after running the programs. Try to interpret the entry in /proc/sysvipc/shm.
Once the two programs identify an alphanumeric character, they will terminate. As part of that process, the shared memory area is detached by using the system call shmdt. In fact, upon exiting the detaching is done automatically. But the shared memory segment is not destroyed. This has to be done by invoking the system call shmctl, which takes the identifier for the shared memory area as an argument, as well as the command IPC_RMID, which marks the shared memory segment as destroyed. This has to be done, otherwise the shared memory segment will persist in memory or in the swap space.
At this point, observation of the entries in /proc/sysvipc/shm can be very useful. If the shared memory segment is not destroyed, the entries will reflect this. Try this by running the program without shmctl.
This is the fastest IPC mechanism in the System V IPC services. However, the System V shared memory mechanism does not have any kind of scheme to ensure that one sees consistent data in the shared memory region. That is, a process can read a shared memory area at the same time another process is writing to it. The programmer can then come across inconsistent data while executing the programs. This suggests that accesses to the shared memory region have to be mutually exclusive; this is achieved via the use of the semaphore mechanism. A process acquires the semaphore before accessing the memory region, and releases it when done.
The shared memory mechanism can be used when the processes access the shared memory areas at different times. One may wonder why we can't make the first process store the data in some file and make another process read the data from the file. But, reading data from a file involves things like:
  • execution of system calls like open, read and close.
  • accessing the secondary storage device (generally hard disk) which involves I/O operations.
These things are not significant if we have a small amount of data to be read. But when we have large amounts of data stored in a file, then the load of the two activities mentioned above increases significantly and there is a considerable amount of reduction in the performance of the "reading program". This, again, has a solution, which is the use of memory mapping - something I'll discuss at another time.

Friday, September 24, 2010

C++ Professional Schedule

I am creating a schedule for C++ professionals which is mainly applicable to me :) but can be helpful for others. As C++/Python programmers, we should have good knowledge of the topics below.


C++                        http://cppcoffe.blogspot.in/2012/07/blog-post.html


Data Base              http://cppcoffe.blogspot.in/p/data-base.html
        
Python                   http://cppcoffe.blogspot.in/p/python-skill-and-interview.html



 STL              http://cppcoffe.blogspot.in/p/stl-url.html 

Coding website                              

Monday, September 20, 2010

Thread programming in c++

Basic question with short description

What is a semaphore in an operating system
Semaphores are devices used to help with synchronization. If multiple processes share a common resource, they need a way to be able to use that resource without disrupting each other. You want each process to be able to read from and write to that resource uninterrupted.

A semaphore will either allow or disallow access to the resource, depending on how it is set up. One example setup would be a semaphore which allowed any number of processes to read from the resource, but only one could ever be in the process of writing to that resource at a time.



What is a thread
A thread of execution is the smallest unit of processing that can be scheduled by an operating system.



Designing a Thread Class in C++ 

To understand threads one must think of several programs running at once.
Imagine further that all these programs have access to the same set of global variables and function calls.

Each of these programs would represent a thread of execution and is thus called a thread.
The important differentiation is that each thread does not have to wait for any other thread to proceed.

All the threads proceed simultaneously.
To use a metaphor, they are like runners in a race: no runner waits for another.
They all proceed at their own rate.

Why use threads, you might ask?

Well, threads can often improve the performance of an application, and they do not incur significant overhead to implement.

They effectively give good bang for a buck.
Imagine an image server program that must service requests for images .
The program gets a request for an image from another program. It must then retrieve the image from a database and send it to the program that requested it.

If the server were implemented with a single-threaded approach, only one program could make a request at a time. When it was busy retrieving an image and sending it to a requestor, it could not service other requests.

Of course one could still implement such a system without using threads. It would be a challenge, though. Using threads, one can very naturally design a system to handle multiple requests.


A simple approach would be to create a thread for each request received.


The main thread would create this thread upon receipt of a request.
After retrieving the image, the thread would terminate itself.
This would provide a smooth system that would continue to service new requests even while it was busy servicing other requests at the same time.

Basic Approach

To create a thread, you must specify a function that will become the entry point for the thread. At the operating system level, this is a normal function. We have to do a few tricks to wrap a C++ class around it, because the entry function cannot be a normal member function of a class.

However, it can be a static member function of a class. This is what we will use as the entry point. There is a gotcha here though.

Static member functions do not have access to the this pointer of a C++ object. They can only access static data. Thread entry point functions take a void * as a parameter so that the caller can typecast any data and pass it in to the thread. We will use this parameter to pass the this pointer to the static function. The static function will then typecast the void * and use it to call a non-static member function.

The Implementation 

It should be mentioned that we are going to discuss a thread class with limited functionality. It is possible to do more with threads than this class will allow.

#include <pthread.h>

class Thread
{
   public:
      Thread();
      int Start(void * arg);
   protected:
      int Run(void * arg);
      static void * EntryPoint(void*);
      virtual void Setup();
      virtual void Execute(void*);
      void * Arg() const {return Arg_;}
      void Arg(void* a){Arg_ = a;}
   private:
      pthread_t ThreadId_;
      void * Arg_;

};

Thread::Thread() {}

int Thread::Start(void * arg)
{
   Arg(arg); // store user data
   int code = pthread_create(&ThreadId_, NULL, Thread::EntryPoint, this);
   return code;
}

int Thread::Run(void * arg)
{
   Setup();
   Execute( arg );
   return 0;
}

/* static */
void * Thread::EntryPoint(void * pthis)
{
   Thread * pt = (Thread*)pthis;
   pt->Run( pt->Arg() );
   return NULL;
}

void Thread::Setup()
{
        // Do any setup here
}

void Thread::Execute(void* arg)
{
        // Your code goes here
}

It is important to understand that we are wrapping a C++ object 
around a thread. Each object will provide an interface to a single thread. The
thread and the object are not the same.  
The object can exist without a thread. In this implementation, the
thread does not actually exist until the Start function is called.
 
Notice that we store the user argument in the class.
This is necessary because we need a place to store it temporarily until the 
thread is started. The operating system thread call allows us to pass 
an argument but we have used it to pass the this pointer. So we store 
the real user argument in the class itself and when the execute function 
is called it can get access to the argument.
 

Thread(); This is the constructor. 

int Start(void * arg) : This function provides the means to create the thread and start it going. The argument arg provides a way for user data to be passed into the thread.

Start() : creates the thread by calling the operating system thread creation function.  

int Run(void * arg) : This is a protected function that should never be tampered with.

static void * EntryPoint(void * pthis) : This function serves as the entry point to the thread. It simply casts pthis to Thread * and uses it to call Run(). 

virtual void Setup() : This function is called after the thread has been created but before Execute() is called. If you override this function, remember to call the parent class Setup().

virtual void Execute(void *) :You must override this function to provide your own functionality. 

To use the thread class, you derive a new class and override the Execute() function, where you provide your own functionality. You may override the Setup() function to do any start-up duties before Execute() is called. If you override Setup(), remember to call the parent class Setup(). 

Multi Thread Programming in c++

This tutorial focuses on multi-threading rather than multi-processing, though most of the concepts discussed are common to all concurrency.

Multi-threading refers to an application with multiple threads running within a process; multi-processing refers to an application organised across multiple OS-level processes.

A thread is a stream of instructions within a process. Each thread has its own instruction pointer, set of registers and stack memory.

The virtual address space is per-process, and is therefore common to all threads within the process.
So, data on the heap can be readily accessed by all threads, for good or ill.

Multi-threading is a more "light weight" form of concurrency: there is less context per thread than per process.
As a result thread lifetime, context switching and synchronisation costs are lower.
The shared address space (noted above) means data sharing requires no extra work.


Multi-processing has the opposite benefits. Since processes are insulated from each other by the OS, an error in one process cannot bring down another process.

Contrast this with multi-threading, in which an error in one thread can bring down all the threads in the process. Further, individual processes may run as different users and have different permissions.

Subsequent sections introduce some common problems with multi-threaded code, and solve them using low-level synchronisation constructs.

Race Conditions

A race condition is where the behaviour of code depends on the interleaving of multiple threads. This is perhaps the most fundamental problem with multi-threaded programming.

When analysing or writing single-threaded code we only have to think about the sequence of statements right in front of us; we can assume that data will not magically change between statements.

However, with improperly written multi-threaded code non-local data can change unexpectedly due to the actions of another thread.

Race conditions can result in a high-level logical fault in your program, or (more excitingly) it may even pierce C++'s statement-level abstraction.

That is, we cannot even assume that single C++ statements execute atomically because they may compile to multiple assembly instructions.

In short, this means that we cannot guarantee the outcome of a statement such as foo += 1; if foo is non-local and may be accessed from multiple threads.

A contrived example follows

Listing 1. A logical race condition
int sharedCounter = 50;
void* workerThread(void*)
{
    while(sharedCounter > 0)
    {
        doSomeWork();
        --sharedCounter;
    }
    return NULL;
}


Now imagine that we start a number of threads, all executing workerThread().
If we have just one thread, doSomeWork() is going to be executed the correct number of times (whatever sharedCounter starts out at).

However, with more than one thread, doSomeWork() will most likely be executed too many times.
Exactly how many times depends on the number of threads spawned, computer architecture, operating system scheduling and...chance.

The problem arises because we do not test and update sharedCounter as an atomic operation, so there is a period where the value of sharedCounter is incorrect. During this time other threads can pass the test when they really shouldn't have.

The value of sharedCounter on exit tells us how many extra times doSomeWork() is called. With a single thread, the final value of sharedCounter is of course 0. With multiple threads running, it will be between 0 and -N where N is the number of threads.

Mutexes

A mutex is an OS-level synchronisation primitive that can be used to ensure a section of code can only be executed by one thread at a time.

It has two states: locked and unlocked. When the mutex is locked, any further attempt to lock it will block (the calling thread will be suspended). When the mutex becomes unlocked, if there are threads waiting, one of these will be resumed and will lock the mutex. Furthermore, the mutex may only be unlocked by the thread that locked it.
If we have a resource we need to share between threads, we associate a mutex with it and use the mutex to synchronise resource access. All we need to do is ensure our code locks the mutex before using the resource, and unlocks it after it is finished. This will prevent race conditions related to multiple threads simultaneously accessing that resource.
Diagram 1. Two threads contending for a mutex





Mutexes in Practice - Boost.Threads solution
Boost.Threads is a part of the excellent Boost libraries. It has been intelligently designed to enhance safety by making error-prone code more difficult to write.

We'll be using Boost.Threads throughout the tutorial, since we may as well get used to using a well designed C++ library from the outset. Furthermore, the upcoming C++0x standard (due sometime this decade) will use Boost.Threads as the model for the new threading support, so learning this library will help future-proof your C++ skills.

Listing 2. Boost.Threads synchronisation
int sharedCounter = 50;
boost::mutex counterMutex;

void solutionWorkerThread()
{
    while(sharedCounter > 0)
    {
        bool doWork = false;
        {
            // scoped_lock locks mutex
            boost::mutex::scoped_lock lock(counterMutex);
            if(sharedCounter > 0)
            {
                --sharedCounter;
                doWork = true;
            }
            // scoped_lock unlocks mutex automatically at end of scope
        }

        if(doWork) doSomeWork();
    }
}

In the above solution, the shared counter is checked and updated as an atomic operation (with respect to multiple threads), so the race condition is solved.
Note the way the scoped_lock works: the constructor locks the associated mutex and the destructor unlocks it.
This is the RAII (Resource Acquisition Is Initialisation) idiom [1], and it helps exception safety. If an exception were thrown while we had locked the mutex, the scoped_lock would be destroyed during the normal stack unwinding process and the mutex would be automatically freed.

Exception safety is not an issue with this simple example, since no statement can throw while we have the mutex locked. However, real-world code will almost always benefit from the scoped_lock design.

Unfortunately concurrent code can have many problems: race conditions are only the most fundamental. The next problem we'll cover is called Deadlock, and it commonly arises from the interaction of blocking mutexes.

Adding some more info about mutexes, which I found at a different internet URL:
http://www.cs.cf.ac.uk/Dave/C/node31.html#SECTION003111000000000000000

Why Synchronization

  • When synchronization is the only way to ensure consistency of shared data
  • When threads in two or more processes can use a single synchronization object jointly. Note that the synchronization object should be initialized by only one of the cooperating processes, because reinitializing a synchronization object sets it to the unlocked state.
  • When synchronization can ensure the safety of mutable data
  • When a process maps a file and has a thread in this process get a record's lock. Once the lock is acquired, any other thread in any process mapping the file that tries to acquire the lock is blocked until the lock is released.

Synchronization can matter even when accessing a single primitive variable, such as an integer. On machines where the integer is not aligned to the bus data width, or is larger than the data width, a single memory load can use more than one memory cycle. While this cannot happen on the SPARC architectures, portable programs cannot rely on this.


Mutual Exclusion Locks

Mutual exclusion locks (mutexes) are a common method of serializing thread execution. Mutual exclusion locks synchronize threads, usually by ensuring that only one thread at a time executes a critical section of code. Mutex locks can also preserve single-threaded code.
Mutex attributes may be associated with every thread. To change the default mutex attributes, you can declare and initialize a mutex attribute object and then alter specific values, much as we have seen in the last chapter on more general POSIX attributes. Often, the mutex attributes are set in one place at the beginning of the application so they can be located quickly and modified easily.
After the attributes for a mutex are configured, you initialize the mutex itself. Functions are available to initialize or destroy, lock or unlock, or try to lock a mutex.
Initializing a Mutex Attribute Object

The function pthread_mutexattr_init() is used to initialize attributes associated with this object to their default values. It is prototyped by:
int pthread_mutexattr_init(pthread_mutexattr_t *mattr);

Storage for each attribute object is allocated by the threads system during execution. mattr is an opaque type that contains a system-allocated attribute object. The possible values of mattr's scope are PTHREAD_PROCESS_PRIVATE (the default) and PTHREAD_PROCESS_SHARED. The default value of the pshared attribute when this function is called is PTHREAD_PROCESS_PRIVATE, which means that the initialized mutex can be used within a process.
Before a mutex attribute object can be reinitialized, it must first be destroyed by pthread_mutexattr_destroy() (see below). The pthread_mutexattr_init() call allocates an opaque attribute object; if the object is not destroyed, a memory leak will result.

pthread_mutexattr_init() returns zero after completing successfully. Any other returned value indicates that an error occurred.
A simple example of this function call is:
#include <pthread.h>
pthread_mutexattr_t mattr; 
int ret; 

/* initialize an attribute to default value */ 

ret = pthread_mutexattr_init(&mattr);

Destroying a Mutex Attribute Object

The function pthread_mutexattr_destroy() deallocates the storage space used to maintain the attribute object created by pthread_mutexattr_init(). It is prototyped by:
int pthread_mutexattr_destroy(pthread_mutexattr_t *mattr);
which returns zero after completing successfully. Any other returned value indicates that an error occurred.
The function is called as follows:
#include <pthread.h>
pthread_mutexattr_t mattr; 
int ret; 

/* destroy an attribute */ 
ret = pthread_mutexattr_destroy(&mattr);

The Scope of a Mutex

The scope of a mutex variable can be either process private (intraprocess) or system wide (interprocess).

The function pthread_mutexattr_setpshared() is used to set the scope of a mutex attribute, and it is prototyped as follows:
int pthread_mutexattr_setpshared(pthread_mutexattr_t *mattr, int pshared);
If the mutex is created with the pshared (POSIX) attribute set to the PTHREAD_PROCESS_SHARED state, and it exists in shared memory, it can be shared among threads from more than one process. This is equivalent to the USYNC_PROCESS flag in mutex_init() in Solaris threads. If the mutex pshared attribute is set to PTHREAD_PROCESS_PRIVATE, only those threads created by the same process can operate on the mutex. This is equivalent to the USYNC_THREAD flag in mutex_init() in Solaris threads.

pthread_mutexattr_setpshared() returns zero after completing successfully. Any other returned value indicates that an error occurred.
A simple example call is:
#include <pthread.h> 

pthread_mutexattr_t mattr; 
int ret; 

ret = pthread_mutexattr_init(&mattr); 

/* resetting to its default value: private */
ret = pthread_mutexattr_setpshared(&mattr, PTHREAD_PROCESS_PRIVATE);

The function pthread_mutexattr_getpshared(pthread_mutexattr_t *mattr, int *pshared) may be used to obtain the scope of a current thread mutex as follows:
#include <pthread.h>
pthread_mutexattr_t mattr; 
int pshared, ret; 

/* get pshared of mutex */
ret = pthread_mutexattr_getpshared(&mattr, &pshared);

Initializing a Mutex

The function pthread_mutex_init() is used to initialize a mutex; it is prototyped by:
int pthread_mutex_init(pthread_mutex_t *mp, const pthread_mutexattr_t *mattr);

Here, pthread_mutex_init() initializes the mutex pointed at by mp to its default value if mattr is NULL, or to mutex attributes that have already been set with pthread_mutexattr_init().
A mutex lock must not be reinitialized or destroyed while other threads might be using it. Program failure will result if either action is not done correctly. If a mutex is reinitialized or destroyed, the application must be sure the mutex is not currently in use. pthread_mutex_init() returns zero after completing successfully. Any other returned value indicates that an error occurred.
A simple example call is:
#include <pthread.h>

pthread_mutex_t mp = PTHREAD_MUTEX_INITIALIZER; 
pthread_mutexattr_t mattr; 
int ret; 

/* initialize a mutex to its default value */ 
ret = pthread_mutex_init(&mp, NULL);

When the mutex is initialized, it is in an unlocked state. The effect of mattr being NULL is the same as passing the address of a default mutex attribute object, but without the memory overhead.

Statically defined mutexes can be initialized directly to have default attributes with the macro PTHREAD_MUTEX_INITIALIZER.
To initialise a mutex with non-default values do something like:
/* initialize a mutex attribute */ 
ret = pthread_mutexattr_init(&mattr);

/* change mattr default values with some function */
 ret = pthread_mutexattr_*();

/* initialize a mutex to a non-default value */
ret = pthread_mutex_init(&mp, &mattr);

Locking a Mutex

The function pthread_mutex_lock() is used to lock a mutex. It is prototyped by:
int pthread_mutex_lock(pthread_mutex_t *mp);

pthread_mutex_lock() locks the mutex pointed to by mp. When the mutex is already locked, the calling thread blocks and waits on a prioritized queue. When pthread_mutex_lock() returns, the mutex is locked and the calling thread is the owner. pthread_mutex_lock() returns zero after completing successfully. Any other returned value indicates that an error occurred.
Therefore, to lock a mutex mp one would do the following:
#include <pthread.h> 
pthread_mutex_t mp; 
int ret; 

ret = pthread_mutex_lock(&mp);
To unlock a mutex use the function pthread_mutex_unlock() whose prototype is:
int pthread_mutex_unlock(pthread_mutex_t *mp);

Clearly, this function unlocks the mutex pointed to by mp.
The mutex must be locked, and the calling thread must be the one that last locked the mutex (i.e. the owner). When other threads are waiting for the mutex to become available, the thread at the head of the queue is unblocked. pthread_mutex_unlock() returns zero after completing successfully. Any other returned value indicates that an error occurred.
A simple example call of pthread_mutex_unlock() is:
#include <pthread.h> 

pthread_mutex_t mp; 
int ret; 

/* release the mutex */ 
ret = pthread_mutex_unlock(&mp);

Lock with a Nonblocking Mutex

The function pthread_mutex_trylock() attempts to lock a mutex and is prototyped by:
int pthread_mutex_trylock(pthread_mutex_t *mp);

This function attempts to lock the mutex pointed to by mp. pthread_mutex_trylock() is a nonblocking version of pthread_mutex_lock(). When the mutex is already locked, this call returns with an error. Otherwise, the mutex is locked and the calling thread is the owner. pthread_mutex_trylock() returns zero after completing successfully. Any other returned value indicates that an error occurred.
The function is called as follows:
#include <pthread.h> 
pthread_mutex_t mp;

/* try to lock the mutex */
int ret;
ret = pthread_mutex_trylock(&mp);

Destroying a Mutex

The function pthread_mutex_destroy() may be used to destroy any state associated with the mutex. It is prototyped by:
int pthread_mutex_destroy(pthread_mutex_t *mp);
and destroys a mutex pointed to by mp.
Note that the space for storing the mutex is not freed. pthread_mutex_destroy() returns zero after completing successfully. Any other returned value indicates that an error occurred.
It is called by:
#include <pthread.h> 
pthread_mutex_t mp;
int ret; 

/* destroy mutex */
ret = pthread_mutex_destroy(&mp);

Mutex Lock Code Examples

Here are some code fragments showing mutex locking.

Mutex Lock Example

We develop two small functions that use the mutex lock for different purposes.
  • The increment_count() function uses the mutex lock simply to ensure an atomic update of the shared variable count.
  • The get_count() function uses the mutex lock to guarantee that the (long long) 64-bit quantity count is read atomically. On a 32-bit architecture, a long long is really two 32-bit quantities.
The two functions are as follows:
#include <pthread.h> 
pthread_mutex_t count_mutex; 
long long count; 

void increment_count() 
   { pthread_mutex_lock(&count_mutex); 
     count = count + 1;
     pthread_mutex_unlock(&count_mutex); 
   }   


long long get_count()  
   { long long c;
     pthread_mutex_lock(&count_mutex); 
     c = count; 
     pthread_mutex_unlock(&count_mutex);
     return (c); 
    }
Recall that reading an ordinary integer value is an atomic operation on most machines, because an integer fits in a single machine word.

Using Locking Hierarchies: Avoiding Deadlock

You may occasionally want to access two resources at once. For instance, you may be using one of the resources, and then discover that the other resource is needed as well. However, there could be a problem if two threads attempt to claim both resources but lock the associated mutexes in different orders.
In this example, if the two threads lock mutexes 1 and 2 respectively, then a deadlock occurs when each attempts to lock the other mutex.

The best way to avoid this problem is to make sure that whenever threads lock multiple mutexes, they do so in the same order. This technique is known as a lock hierarchy: order the mutexes by logically assigning numbers to them, and honor the restriction that a thread must not take a mutex assigned n while holding any mutex assigned a number greater than n.

Note: The lock_lint tool can detect the sort of deadlock problem shown in this example.
The best way to avoid such deadlock problems is to use lock hierarchies: when locks are always taken in a prescribed order, deadlock should not occur. However, this technique cannot always be used, because sometimes you must take the mutexes in an order other than the prescribed one. To prevent deadlock in such a situation, use pthread_mutex_trylock(): one thread must release its mutexes when it discovers that deadlock would otherwise become inevitable.
The idea of conditional locking uses this approach:
Thread 1:
pthread_mutex_lock(&m1); 
pthread_mutex_lock(&m2); 

/* no processing */
pthread_mutex_unlock(&m2);
pthread_mutex_unlock(&m1);
Thread 2:
for (;;) {
  pthread_mutex_lock(&m2);
  if (pthread_mutex_trylock(&m1) == 0)
    /* got it! */
    break;
  /* didn't get it */
  pthread_mutex_unlock(&m2);
}
/* got both locks; no processing */
pthread_mutex_unlock(&m1);
pthread_mutex_unlock(&m2);

In the above example, thread 1 locks the mutexes in the prescribed order, but thread 2 takes them out of order. To make certain that there is no deadlock, thread 2 has to take mutex 1 very carefully: if it were to block waiting for the mutex to be released, it would likely enter a deadlock with thread 1. To ensure this does not happen, thread 2 calls pthread_mutex_trylock(), which takes the mutex if it is available. If it is not, thread 2 returns immediately, reporting failure. At this point, thread 2 must release mutex 2 so that thread 1 can lock it, and then release both mutex 1 and mutex 2.

Nested Locking with a Singly Linked List

We met basic linked structures in Section 10.3. When threads share a linked-list structure, the possibility of deadlock may arise.
By nesting mutex locks in the linked data structure and making a simple amendment to the linked-list code, we can prevent deadlock by taking the locks in a prescribed order.
The modified list node is as follows:
typedef struct node1 { 
     int value; 
     struct node1 *link;
     pthread_mutex_t lock; 
   } node1_t;
Note: we simply amend a standard singly linked list structure so that each node contains a mutex.
Assuming we have created a variable node1_t ListHead.
To remove a node from the list:
  • first search the list starting at ListHead (which itself is never removed) until the desired node is found.
  • To protect this search from the effects of concurrent deletions, lock each node before any of its contents are accessed. Because all searches start at ListHead, there is never a deadlock because the locks are always taken in list order.
  • When the desired node is found, lock both the node and its predecessor since the change involves both nodes. Because the predecessor's lock is always taken first, you are again protected from deadlock.
The C code to remove an item from a singly linked list with nested locking is as follows:
node1_t *delete(int value)
{
  node1_t *prev, *current;
  prev = &ListHead;
  pthread_mutex_lock(&prev->lock);
  while ((current = prev->link) != NULL)
  {
    pthread_mutex_lock(&current->lock);
    if (current->value == value)
    {
      prev->link = current->link;
      pthread_mutex_unlock(&current->lock);
      pthread_mutex_unlock(&prev->lock);
      current->link = NULL;
      return(current);
    }
    pthread_mutex_unlock(&prev->lock);
    prev = current;
  }
  pthread_mutex_unlock(&prev->lock);
  return(NULL);
}

 





Deadlock
Deadlock is where one or more threads wait for resources that can never become available.
The classic case (illustrated below) is where two threads both require two shared resources

and they use blocking mutexes to lock them in opposite order.

Thread A locks resource X while thread B locks resource Y. Next, thread A attempts to lock resource Y and thread B attempts to lock resource X: since both resources are already locked (by the other thread), both threads wait indefinitely.
The following diagram should make the sequence clear.
Classic deadlock (diagram omitted).

It is easy to write code where deadlock is inevitable; here is the classic case:

boost::mutex resourceX;
boost::mutex resourceY;

void deadlockThreadAFunc()
{
    uint counter = 0;
    while(true)
    {
        boost::mutex::scoped_lock lockX(resourceX);
        boost::thread::yield(); // to encourage deadlock
        boost::mutex::scoped_lock lockY(resourceY);
        std::cout << "threadA working: " << ++counter << "\n";
    }
}

void deadlockThreadBFunc()
{
    uint counter = 0;
    while(true)
    {
        boost::mutex::scoped_lock lockY(resourceY);
        boost::thread::yield(); // to encourage deadlock
        boost::mutex::scoped_lock lockX(resourceX);
        std::cout << "threadB working: " << ++counter << "\n";
    }
}

The yield statements in the above example force the current thread to stop executing and allow another thread to continue. They are for demonstration purposes only, to encourage the deadlock to occur quickly. Without them, a single-core machine may run for some time without a context switch occurring between the two resource-locking statements (and thus triggering the deadlock).

For this toy example the fix is simple but non-intuitive: all we need to do is ensure we lock the resources in a consistent order, so changing deadlockThreadBFunc() to lock resourceX before resourceY ensures there will be no deadlock. Unlock order is not significant.
One valuable technique for ensuring strict lock-order discipline is to always lock a group of resources in order of their memory addresses.
However, deadlock becomes much more of a problem in non-trivial code with complex data-sharing requirements, where resources are locked at multiple levels and in many different contexts.

This is one of the main reasons multi-threaded programming is so difficult: it sometimes requires coordination between multiple levels of your code, and this is the enemy of encapsulation.

Self Deadlock
Another problem is self-deadlock, which occurs when a single thread attempts to lock a mutex twice: the second attempt will block indefinitely.
This can easily happen when the same resource is used at multiple levels within an algorithm.

In particular, consider a class that attempts to provide a thread-safe interface by synchronising all member function calls with a single internal mutex: the mutex is locked at the beginning of every method and unlocked on method return.
If that class now calls one member function from within another, there will be a self-deadlock.
To counter this problem there is the concept of recursive mutexes. A recursive mutex allows multiple locks from within a single thread to succeed, though that thread must unlock the mutex as many times as it has locked it.
The disadvantage of a recursive mutex is a slight performance decrease.

Livelock
Livelock is when multiple threads continue to run (i.e. they do not block indefinitely as in deadlock), but the system as a whole does not make progress due to repeating patterns of non-productive resource contention.

Listing 1. Contrived livelock

boost::try_mutex resourceX;
boost::try_mutex resourceY;


void threadAFunc()
{
    uint counter = 0;
    while(true)
    {
        boost::try_mutex::scoped_lock lockX(resourceX);
        boost::thread::yield(); // encourage the livelock

        boost::try_mutex::scoped_try_lock lockY(resourceY);
        if(lockY.locked() == false) continue;

        std::cout << "threadA working: " << ++counter << "\n";
    }
}

void threadBFunc()
{
    uint counter = 0;
    while(true)
    {
        boost::try_mutex::scoped_lock lockY(resourceY);
        boost::thread::yield(); // encourage the livelock

        boost::try_mutex::scoped_try_lock lockX(resourceX);
        if(lockX.locked() == false) continue;

        std::cout << "threadB working: " << ++counter << "\n";
    }
}

This code exhibits an almost total livelock, though for each yield statement removed the livelock gets a little less severe. When I run this example, the threads at best do a few pieces of work per second. How does the livelock occur? Each thread takes its first lock, yields, fails to try-lock the other thread's resource, releases everything, and repeats (diagram omitted).
Another use of the term livelock involves starvation, where one part of a system monopolises system resources and starves another part of the system. For example, a system composed of request-queueing and request-servicing components might exhibit starvation if an overwhelming number of requests cause the request-queueing component to use all system resources.


Condition Variables

Mutexes cater to the most general form of resource sharing, but sometimes threads have more specific sharing requirements. Condition variables allow us to express and control a directed dependency between threads: in general, one group of threads should only execute when a given condition becomes true, and that condition will be satisfied by another group of threads.

When thread operation is coordinated by a condition variable, one group of threads waits until the condition is triggered, at which point one or more of the waiting threads is woken.
An example: there are two groups of threads, one producing something and the other consuming it.
The Producer-Consumer pattern is also useful outside multi-threaded programming, where it allows you to decouple data/event/request/whatever production from consumption
For our contrived example, we produce and consume characters from the alphabet.
std::queue<char> producedChars;
boost::condition characterAvailable;
boost::mutex characterMutex;
void conditionDataProducer()
{
    unsigned char c = 'A';
    for(uint i = 0; i < numCharacters;)
    {
        if((c >= 'A' && c <= 'Z'))
        {
            boost::mutex::scoped_lock lock(characterMutex);
            producedChars.push(c++);
            characterAvailable.notify_one();
            ++i;
        }
        else
        {
            c = 'A';
        }
    }
    boost::mutex::scoped_lock lock(characterMutex);
    producedChars.push(EOF);
    characterAvailable.notify_one();
}
void conditionDataConsumer()
{
    boost::mutex::scoped_lock lock(characterMutex);
    while(true)
    {
        // on entry, releases mutex and suspends this thread
        // on return, reacquires mutex
        while(producedChars.empty())
        {
            characterAvailable.wait(lock);
        }
        if(producedChars.front() == EOF) break;
        producedChars.pop();
    }
}

Take a look at the consumer first: it acquires the mutex and then uses the condition variable to wait.
When wait() is called, the mutex is unlocked and the calling thread is suspended.
The consumer thread will only be resumed when the condition represented by characterAvailable becomes true.
The producer simply pushes characters onto the shared container and then calls notify_one(), which allows one of the threads waiting on this condition (a consumer) to resume and process the new data.
This is much more efficient than having consumers endlessly poll an empty queue.
Condition variables are also useful in implementing the Monitor Object concurrency pattern, which we talk about next.
Multithreading Patterns and Idioms
Scoped Locking

We've been using the scoped-locking idiom throughout the first chapter, but it deserves to be treated separately.
After all, much multi-threaded programming is still done using more procedural APIs such as POSIX threads (pthreads), in which the programmer must explicitly unlock mutexes.
The Boost.Threads library uses the Resource-Acquisition-Is-Initialisation (RAII) idiom[1] in the implementation of its scoped locks.
When you create a scoped_lock object (associated with the mutex you want to acquire), the constructor locks the mutex and the destructor unlocks it.
C++ has deterministic destruction, so the language guarantees that local objects will be destroyed when a scope is exited by any means.
The most obvious benefit of this idiom is that it is now impossible to forget to unlock a mutex: the scoped_lock destructor will be called no matter how or where the function is exited, so we no longer have to check that all return paths unlock the mutex.
This is an especially big win when a function has multiple exit points, or when a maintenance programmer adds a new exit point without fully comprehending the function.
Another important benefit of scoped locking is that we gain a measure of exception safety: when exceptions are thrown, local objects are destroyed during stack unwinding, so the scoped_lock helps ensure the function exits in a consistent state.
Arguably, a disadvantage of scoped locking is that it decreases the clarity of your locking strategy, since unlocking is implicit rather than explicit.
Read/Write Lock
A Read/Write Lock is a performance improvement over a standard mutex for cases where reads outnumber writes.
The idea is to divide locks into two classes: reading and writing. Multiple simultaneous read locks may be held, but write locks are held exclusively.
The exclusive write lock ensures that race conditions do not occur, since if one client is writing to the data no other client may read or write it.
Also, allowing multiple simultaneous read locks decreases resource contention, since multiple readers can safely use the shared data.
This increases performance over a standard mutex for the assumed usage pattern of frequent simultaneous reads and infrequent writes.
However, a simple Read/Write Lock implementation that avoids starvation in all cases is not easy to create.
The following implementation prioritises write locks, and so does not allow the assumed greater number of readers to starve writers.
This is accomplished by making new read locks wait if there is a writer waiting for a lock.
The disadvantage of this implementation is that if usage is not as expected writers may overwhelm the mutex and starve readers indefinitely.
Listing 1. Read/Write Lock favouring writers
// multiple clients may read simultaneously
// but write access is exclusive
// writers are favoured over readers
class ReadWriteMutex : boost::noncopyable
{
public:
    ReadWriteMutex() :
        m_readers(0),
        m_pendingWriters(0),
        m_currentWriter(false)
    {}
    // local class has access to ReadWriteMutex private members, as required
    class ScopedReadLock : boost::noncopyable
    {
    public:
        ScopedReadLock(ReadWriteMutex& rwLock) :
            m_rwLock(rwLock)
        {
            m_rwLock.acquireReadLock();
        }
        ~ScopedReadLock()
        {
            m_rwLock.releaseReadLock();
        }
    private:
        ReadWriteMutex& m_rwLock;
    };
    class ScopedWriteLock : boost::noncopyable
    {
    public:
        ScopedWriteLock(ReadWriteMutex& rwLock) :
            m_rwLock(rwLock)
        {
            m_rwLock.acquireWriteLock();
        }
        ~ScopedWriteLock()
        {
            m_rwLock.releaseWriteLock();
        }
    private:
        ReadWriteMutex& m_rwLock;
    };

private: // data
    boost::mutex m_mutex;
    unsigned int m_readers;
    boost::condition m_noReaders;
    unsigned int m_pendingWriters;
    bool m_currentWriter;
    boost::condition m_writerFinished;

private: // internal locking functions
    void acquireReadLock()
    {
        boost::mutex::scoped_lock lock(m_mutex);
        // require a while loop here, since when the writerFinished condition is notified
        // we should not allow readers to lock if there is a writer waiting
        // if there is a writer waiting, we continue waiting
        while(m_pendingWriters != 0 || m_currentWriter)
        {
            m_writerFinished.wait(lock);
        }
        ++m_readers;
    }
    void releaseReadLock()
    {
        boost::mutex::scoped_lock lock(m_mutex);
        --m_readers;
        if(m_readers == 0)
        {
            // must notify_all here, since if there are multiple waiting writers
            // they should all be woken (they continue to acquire the lock exclusively though)
            m_noReaders.notify_all();
        }
    }
    // this function is currently not exception-safe:
    // if the wait calls throw, m_pendingWriter can be left in an inconsistent state
    void acquireWriteLock()
    {
        boost::mutex::scoped_lock lock(m_mutex);
        // ensure subsequent readers block
        ++m_pendingWriters;
       
        // ensure all reader locks are released
        while(m_readers > 0)
        {
            m_noReaders.wait(lock);
        }
        // only continue when the current writer has finished
        // and another writer has not been woken first
        while(m_currentWriter)
        {
            m_writerFinished.wait(lock);
        }
        --m_pendingWriters;
        m_currentWriter = true;
    }
    void releaseWriteLock()
    {       
        boost::mutex::scoped_lock lock(m_mutex);
        m_currentWriter = false;
        m_writerFinished.notify_all();
    }
};
Monitor Object
The Monitor Object pattern[1] encapsulates an object's synchronisation policy along with its functionality, providing a thread-safe public interface and making the object a reliable system building block.
The Monitor Object is a normal object except that it provides a threadsafe interface, one that
may be called safely without external synchronisation by any number of concurrent clients.
The simplest form of this pattern is embodied by a class with a public interface synchronised by a per-object mutex. The first thing each interface method does is lock that object's mutex, and the last thing it does is unlock the mutex.
This ensures only one interface method is executing at any time.
However, another common aspect of a Monitor Object is to have more complex blocking interactions between the interface methods. These interactions are coordinated by condition variables. For example, one interface method may internally reach a point where it must wait for some condition within the object to become true before it can continue.
This is the kind of situation condition variables were born to handle.
Note that as the synchronisation complexity inside a Monitor Object rises, the value of encapsulating this complexity also rises (the burden on clients remains minimal).
In lieu of the Monitor Object pattern, relying on clients to coordinate synchronisation could improve performance, but scattering an object's locking policy throughout its clients is not going to be a reliable design.

 
Let's see some code. We'll reimplement the example from the condition variable section, replacing the std::queue with a Monitor Object.

Listing 1. MonitorQueue.hpp
template <typename DataT>
class MonitorQueue : boost::noncopyable
{
    public:
    void push(const DataT& newData)
    {
        boost::mutex::scoped_lock lock(m_monitorMutex);
        m_queue.push(newData);
        m_itemAvailable.notify_one();
     }

    DataT popWait()
    {
        boost::mutex::scoped_lock lock(m_monitorMutex);

        // use a while loop to guard against spurious wakeups:
        // only proceed once the queue is genuinely non-empty
        while(m_queue.empty())
        {
            m_itemAvailable.wait(lock);
        }

        DataT temp(m_queue.front());
        m_queue.pop();
        return temp;
    }

private:
    std::queue<DataT> m_queue;

    boost::mutex m_monitorMutex;
    boost::condition m_itemAvailable;
};
Listing 2. main.cpp
MonitorQueue<char> producedChars;

void dataProducer()
{
    unsigned char c = 'A';
    for(uint i = 0; i < numCharacters;)
    {
        if((c >= 'A' && c <= 'Z'))
        {
            producedChars.push(c++);
            ++i;
        }
        else
        {
            c = 'A';
        }
    }
    producedChars.push(EOF);
}

void dataConsumer()
{
    while(true)
    {
        if(producedChars.popWait() == EOF) return;
    }
}
As you can see, I parameterised the MonitorQueue with the type of data
it handles (I couldn't bear to hardcode this).

You should also be able to see how much simpler the clients are than in the original example.
Additionally, it is now a simple matter to create new consumers or producers:
we've abstracted away the synchronisation details so all this new code has to worry about is doing its specific job.


Active Object
The Active Object pattern[1] decouples method invocation from method execution by doing all work in a private per-object thread.

Whereas in the Monitor Object pattern interface methods could block, Active Object interface methods
return after queueing the relevant processing in an internal "Activation List".


The full Active Object pattern can be quite complex, since we need some way of returning method return values, as well as a way to bind method calls to their parameters for deferred execution.


Implementations will vary, but conceptually the interface methods will immediately return an object called a "Future", which the client can later query for the method's result.


The Proxy provides the Active Object's public interface and runs in the calling thread.
It instantiates a concrete method implementation object along with the corresponding Future for the result, and these are added to the Activation Queue.

The Active Object's private thread will continue to dequeue and execute methods while they are available from the Activation Queue.

An important implementation detail is that Future lifetime is non-trivial: it depends on both the client and the concrete implementation methods.

In C++ a good solution is to use a shared pointer so that the Future is cleaned up when there are no more references to it.

Let's have a look at a simplified implementation of the ActiveObject pattern.
The servant we are wrapping has a single function that takes a reasonably long time to run. In fact, the first parameter tells the doSomeWork function how long, in milliseconds, to execute for, and we'll use this to test the results we get.

Listing 1. Servant interface.
class Servant
{
public:
    Servant();
    int doSomeWork(unsigned int, bool);
};
Now we'll turn this into an ActiveObject, but first let's see how it will be used:

Listing 2. int main
int main(int argc, char* argv[])
{
    ActiveObjectProxy activeObject;

    std::cout << "Dispatching...\n";
    shared_ptr<FutureResult<int> > result_a = activeObject.doSomeWork(500, false);
    shared_ptr<FutureResult<int> > result_b = activeObject.doSomeWork(1000, false);
    shared_ptr<FutureResult<int> > result_c = activeObject.doSomeWork(2000, false);
    shared_ptr<FutureResult<int> > result_d = activeObject.doSomeWork(4000, false);

    std::cout << "Waiting for results...\n";
    std::cout << result_a->getResult() << "\n"; // outputs 500 after blocking for ~500ms
    std::cout << result_b->getResult() << "\n"; // outputs 1000 after blocking for an additional ~1000ms
    std::cout << result_c->getResult() << "\n"; // outputs 2000 after blocking for an additional ~2000ms
    std::cout << result_d->getResult() << "\n"; // outputs 4000 after blocking for an additional ~4000ms

    std::cout << "Press <Enter> to continue...";
    std::cin.get();

    return 0;
}
The way the above code should work is that the method request dispatches should execute very quickly, and then the code should block when actually retrieving the results. In a less contrived example, you might be doing other work while periodically polling the FutureResult to see when the method request was complete.

Listing 3. ActiveObjectProxy
/**
The ActiveObjectProxy coordinates the behaviour of the ActiveObject including instantiating
shared FutureResult instances, binding method request parameters, and adding concrete methods
to the activation queue.
*/
class ActiveObjectProxy
{
public:
    ActiveObjectProxy() :
        m_exit(false)
    {
        m_servantThread.reset(new boost::thread(boost::bind(&ActiveObjectProxy::processActivationQueue, this)));
    }

    ~ActiveObjectProxy()
    {
        // ensure servant thread is shut down
        {
            boost::mutex::scoped_lock lock(m_exitMutex);
            m_exit = true;
        }
        m_servantThread->join();
    }


    typedef FutureResult<int> ResultT;
    typedef shared_ptr<ResultT> RefCountedResultT;
    typedef boost::function<int (void)> CallbackT;

    RefCountedResultT doSomeWork(unsigned int inputInteger, bool inputBool)
    {
        // instantiate ref-counted FutureResult
        RefCountedResultT futureResult(new ResultT);

        // bind parameters to the relevant Servant method
        CallbackT callback = boost::bind(&Servant::doSomeWork, m_servant, inputInteger, inputBool);

        // instantiate and enqueue a concrete method request
        shared_ptr<MethodRequest> concreteMethod(new ConcreteMethod<int>(futureResult, callback));
        m_activationQueue.push(concreteMethod);

        return futureResult;
    }

private:
    // this method is executed in a private thread,
    // allowing the servant to do work in the background
    void processActivationQueue()
    {
        while(true)
        {
            {
                boost::mutex::scoped_lock lock(m_exitMutex);
                if(m_exit) return;
            }

            // attempt to unqueue and process the next waiting method request
            // but don't wait any longer than 100ms
            const int waitMs = 100;
            try
            {
                m_activationQueue.pop(waitMs)->execute();
            }
            catch(const WaitTimeout&) {}
            // exceptions should not generally be used for non-exceptional flow control,
            // but it is a simple, adequate solution for this toy example
            // a better, but more confusing solution might use boost::optional
        }
    }


private:
    Servant m_servant;
    scoped_ptr<boost::thread> m_servantThread;

    typedef MonitorQueue<shared_ptr<MethodRequest> > ActivationQueueT;
    ActivationQueueT m_activationQueue;

    boost::mutex m_exitMutex;
    bool m_exit;
};



The ActiveObjectProxy coordinates the whole ActiveObject. It binds parameters and enqueues concrete method requests in the activation queue, and also manages the servant thread which processes those method requests.