Discussion: free vector memory
Zhan Weiming
2008-06-19 10:20:26 UTC
Hi,

I have simplified the code as below:

typedef struct
{
    int x;
    int y;
} TypeA;

typedef vector<TypeA> TypeB;
typedef vector<TypeB*> TypeC;

TypeA a;
TypeB b;
TypeC c;

MainLoop()
{
    do
    {
        //insert one object to c
        InsertOneRecord();

        //take some actions

        //erase object in c
        DeleteOneItem(idx); //expect to erase memory allocated.
    } while(somecondition); //somecondition keeps true for a long time.
}

InsertOneRecord()
{
    GetB(b); // push some items to b
    TypeB *pb = new TypeB;
    FillB(b, pb); //push items in b to *pb
    c.push_back(pb);
}

DeleteOneItem(int index)
{
    TypeB *pb = c[index];
    delete pb;

    vector<TypeB*>::iterator itr;
    itr = c.begin();
    itr += index;
    c.erase(index);
}


After running for some time, it raised "out of memory".

Does anybody know where the memory leak is? Or how to clear the memory as expected?

Thanks in advance.
Zhan
Alan Bellingham
2008-06-19 10:49:44 UTC
Post by Zhan Weiming
Does anybody know where the memory leak is? Or how to clear the memory as expected?
I'm not going to look that closely.

What you have is a typical example of why not to store raw pointers in a
vector. What I suggest you do is to go to www.boost.org and get the
smart_ptr part of the boost library. Use the boost::shared_ptr<>
template.

Alan Bellingham
--
Team Browns
ACCU Conference 2009: to be announced
Chris Uzdavinis (TeamB)
2008-06-19 13:50:15 UTC
Post by Zhan Weiming
Hi,
I have simplified codes as below,
Simplified too much, I'm afraid. Since it doesn't compile, it
probably isn't representing your problem very well either. Certainly,
the problem cannot be reproduced by running this code.

Perhaps you left out something important to the problem you're
describing?
Post by Zhan Weiming
typedef struct
{
int x;
int y;
}TypeA;
No need for typedefs on structs, unless this is extracted from a C
header.
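For illustration, the plain C++ spelling (a minimal sketch):

struct TypeA
{
    int x;
    int y;
};

TypeA a; // in C++ the struct name is a type name by itself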
Post by Zhan Weiming
typedef vector<TypeA> TypeB;
typedef vector<TypeB*> TypeC;
Storing raw pointers to allocated objects is a frequent source of
memory related errors. If a vector holds raw pointers, it's best that
it not also have ownership responsibility, but be an observer. Even
then, you have memory issues if the object is deleted elsewhere and
left in the vector.

Holding smart pointers will solve this problem. Using shared_ptr,
the vector element can own and manage the allocated underlying
object. Using weak_ptr, the vector can safely be an observer. (Weak
pointers refer to objects managed by a shared_ptr, but do not keep them
alive. Additionally, once the last shared_ptr destroys the object, all
weak pointers that refer to it expire automatically.)
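For illustration, a minimal sketch of both roles (using boost, as suggested
elsewhere in this thread; the container names are made up for the example):

#include <vector>
#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>

typedef boost::shared_ptr<TypeB> TypeBOwner;   // TypeB as defined earlier
typedef boost::weak_ptr<TypeB> TypeBObserver;

std::vector<TypeBOwner> owners;       // owns and manages the TypeB objects
std::vector<TypeBObserver> observers; // observes without keeping anything alive

void Demo()
{
    TypeBOwner p(new TypeB);
    owners.push_back(p);
    observers.push_back(p);

    owners.clear();                     // last shared_ptr gone: the TypeB is destroyed
    bool gone = observers[0].expired(); // true: the weak_ptr noticed automatically
}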
Post by Zhan Weiming
TypeA a;
TypeB b;
TypeC c;
Global variables?
Post by Zhan Weiming
MainLoop()
{
do
{
//insert one object to c
InsertOneRecord();
ok so far.
Post by Zhan Weiming
//take some actions
Perhaps you're using a lot of memory in some of these actions?
Post by Zhan Weiming
//erase object in c
DeleteOneItem(idx); //expect to erase memory allocated.
Conceptually sound, but the implementation is far from safe.
Post by Zhan Weiming
} while(somecondition);//somecondition keeps true for a long time.
}
InsertOneRecord()
{
GetB(b); // push some items to b
TypeB *pb = new TypeB;
FillB(b, pb); //push items in b to *pb
c.push_back(pb);
}
Using a global "b" object is a bad choice. You are giving up
multi-threaded options, among other issues. Since this function needs a
TypeB object, create it here. Or have GetB create and return it to you,
so there isn't the "create it, initialize it, then immediately overwrite
it by calling GetB()" pattern.

Further, you allocate a TypeB and store it in a raw pointer. After
that, making calls to FillB or push_back may throw, and so the object
will be leaked if an exception occurs.

FillB would be best as a member function of TypeB, since then the new
expression could allocate and initialize the object in one step, rather
than allocating and default-initializing it, only to immediately throw
away that default initialization and overwrite it with real values in a
function call.

Again, using a smart pointer would make this function exception safe
too. (pb would become a smart pointer instead of a raw pointer. Then
if an exception is thrown, the smart pointer's destructor would
deallocate the underlying object, averting a memory leak.)
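A minimal sketch of that (assuming c were changed to hold
boost::shared_ptr<TypeB>, and keeping your GetB/FillB helpers as they are):

void InsertOneRecord()
{
    GetB(b); // push some items to b
    boost::shared_ptr<TypeB> pb(new TypeB);
    FillB(b, pb.get()); // push items in b to *pb
    c.push_back(pb);    // if this throws, pb still deletes the TypeB
}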
Post by Zhan Weiming
DeleteOneItem(int index)
{
TypeB *pb = c[index];
delete pb;
vector <TypeB*>::iterator itr;
itr = c.begin();
itr += index;
c.erase(index);
}
Technically, you want to delete the pointer only after erasing it from
the vector. Otherwise there is a period of time where the vector
holds a dangling pointer, and if you're pedantically following the
language standard, even reading the address held in an invalid pointer
without dereferencing it can produce undefined behavior.

Also, you probably want range checking here, and may benefit from
using the std::advance function too.

Your code shows a design habit of declaring variables in one place and
initializing them later. This is frequently necessary in C, but much
less so in C++, thanks to constructors. One danger is the window of
code where the values are wrong; if accidentally used there, they would
result in misbehaving code. The other danger is needless performance
degradation. Unless the constructor is trivial, it's faster to create
the object "correctly" the first time, directly creating it with
initially correct values. It doesn't make sense to create it with a
default (but wrong) value and then write new values on top of that.
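For instance (a contrived sketch; MakeB is a hypothetical factory that
returns a TypeB by value):

TypeB b1;           // declare now, fill in later: a window where b1 holds wrong (empty) data
GetB(b1);

TypeB b2 = MakeB(); // created correctly in one step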


When done with initialization and smart pointers, the code is much
safer, more concise, and easier to read. (I think.) Here is a safer
rewrite of your DeleteOneItem function:

#include <iterator>
#include <stdexcept>
#include <memory>

void
DeleteOneItem(int index)
{
    if (index < 0 || index >= static_cast<int>(c.size())) {
        throw std::range_error("DeleteOneItem: index out of bounds");
    }
    std::auto_ptr<TypeB> killer(c[index]);
    vector<TypeB*>::iterator itr = c.begin();
    std::advance(itr, index); // std::advance returns void, so advance a named iterator
    c.erase(itr);
}

auto_ptr is a smart pointer that deletes whatever it holds when it
goes out of scope. In the future this should be a scoped_ptr, but
that's not yet part of the C++ language (and auto_ptr is going to be
labeled as deprecated).
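Its behavior in a nutshell (a minimal sketch):

{
    std::auto_ptr<TypeB> p(new TypeB); // p owns the TypeB
    // ... use p like a pointer ...
}   // p goes out of scope here and deletes the TypeB automatically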
Post by Zhan Weiming
After running for some time, it raised "out of memory".
Perhaps your vectors are simply growing too much?

Perhaps the rest of your application is doing something with a lot of
memory?
Post by Zhan Weiming
Does anybody know where the memory leak is?
Nobody but you can answer that, since only you have your source code.

(Just to be clear, I'm not asking for your whole application, but the
representative code needs to be complete enough to actually reproduce
and show the problem.)

Also, it'd be useful if you gave a description of how this is used.
As fast as the CPU can go, or just every now and then? How big are
the vectors anyway? How much data are you trying to store? Etc.
Post by Zhan Weiming
Or how to clear the memory as expected?
How do you know that this vector memory management is the actual
source of the problem? Are exceptions occurring? If so, that could
cause leaks which, if done often enough, could accumulate into a lot
of memory.
--
Chris (TeamB);
Bob Gonder
2008-06-19 17:42:01 UTC
Post by Chris Uzdavinis (TeamB)
void
DeleteOneItem(int index)
{
if (index < 0 || index >= static_cast<int>(c.size())) {
throw std::range_error("DeleteOneItem: index out of bounds");
}
std::auto_ptr<TypeB> killer(c[index]);
vector<TypeB*>::iterator itr = c.begin();
std::advance(itr, index);
c.erase(itr);
}
Oh, cool..you had me going for a minute there,
creating, then never using killer...auto-destruct.

OTOH, if as you suggested, c contained smart
pointers to b vectors, then wouldn't c.erase
destruct the b vector without the extra killer?

Likewise, delete c would delete all b vectors?
Post by Chris Uzdavinis (TeamB)
Post by Zhan Weiming
DeleteOneItem(int index)
{
vector <TypeB*>::iterator itr;
itr = c.begin();
itr += index;
c.erase(index);
}
Erasing index instead of itr.
Chris Uzdavinis (TeamB)
2008-06-19 17:44:35 UTC
Post by Bob Gonder
Post by Chris Uzdavinis (TeamB)
void
DeleteOneItem(int index)
{
if (index < 0 || index >= static_cast<int>(c.size())) {
throw std::range_error("DeleteOneItem: index out of bounds");
}
std::auto_ptr<TypeB> killer(c[index]);
vector<TypeB*>::iterator itr = c.begin();
std::advance(itr, index);
c.erase(itr);
}
Oh, cool..you had me going for a minute there,
creating, then never using killer...auto-destruct.
OTOH, if as you suggested, c contained smart
pointers to b vectors, then wouldn't c.erase
destruct the b vector without the extra killer?
Likewise, delete c would delete all b vectors?
Yes, if the vector elements held smart pointers, then there would be
no need for killer at all.
--
Chris (TeamB);
Zhan Weiming
2008-06-21 06:10:15 UTC
Post by Chris Uzdavinis (TeamB)
Perhaps you left out something important to the problem you're
describing?
Possibly. I'll try to provide as much information as possible.
Post by Chris Uzdavinis (TeamB)
Holding smart pointers will solve this problem.
From boost.org: "a shared_ptr cannot correctly hold a pointer to a
dynamically allocated array." That seems to be just my case: the pointer
(pb) points to a vector. Is that a dynamically allocated array?
Post by Chris Uzdavinis (TeamB)
Post by Zhan Weiming
MainLoop()
{
do
{
//insert one object to c
InsertOneRecord();
ok so far.
Post by Zhan Weiming
//take some actions
Perhaps you're using a lot of memory in some of these actions?
The related actions (I think) include:

AppendRecord(unsigned int index)
{
    GetB(b);
    for(unsigned int i=0;i<b.size();i++)
    {
        c[index]->push_back(b[i]);
    }
}
Post by Chris Uzdavinis (TeamB)
Post by Zhan Weiming
//erase object in c
DeleteOneItem(idx); //expect to erase memory allocated.
After running for some time, it raised "out of memory".
Perhaps your vectors are simply growing too much?
Perhaps the rest of your application is doing something with a lot of
memory?
Also, it'd be useful if you gave a description of how this is used.
As fast as the CPU can go, or just every now and then? How big are
the vectors anyway? How much data are you trying to store? Etc.
To take an example of what my application is doing: in a telephony
network, every time a phone call is made, some information is held in
memory (in vector "c"); when the call is hung up, the information is
exported to a file and the memory is released (I expect). So memory
should be held only for the information of the ongoing calls (and the
memory on my PC is big enough for that part), if the memory is released
as expected.

For a relatively short recording time, say 3 hours, it is OK. But for 6
hours, it runs out of memory, every time.

I think the point is: in DeleteOneItem(), is the memory allocated to
the vector freed?

Thanks,
Zhan
Alan Bellingham
2008-06-21 11:54:59 UTC
Post by Zhan Weiming
From boost.org: "a shared_ptr cannot correctly hold a pointer to a
dynamically allocated array." That seems to be just my case: the pointer
(pb) points to a vector. Is that a dynamically allocated array?
A vector is not an array. It may contain a pointer to an array
internally, but is itself an object.

The warning was because the shared_ptr uses delete rather than delete[]
when disposing of its content. For an actual smart pointer to a raw
array, you would need to use boost::shared_array<>.
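For illustration, the distinction in code (a minimal sketch):

#include <vector>
#include <boost/shared_ptr.hpp>
#include <boost/shared_array.hpp>

// Fine: a vector is a single object, released with delete.
boost::shared_ptr<std::vector<TypeA> > pv(new std::vector<TypeA>);

// Broken: memory from new[] must be released with delete[], which shared_ptr won't do.
// boost::shared_ptr<TypeA> bad(new TypeA[10]);

// Right for a raw array:
boost::shared_array<TypeA> pa(new TypeA[10]);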

Alan Bellingham
--
Team Browns
<url:http://www.borland.com/newsgroups/> Borland newsgroup descriptions
<url:http://www.borland.com/newsgroups/netiquette.html> netiquette
Chris Uzdavinis (TeamB)
2008-06-23 13:22:56 UTC
Post by Zhan Weiming
Post by Chris Uzdavinis (TeamB)
Perhaps you left out something important to the problem you're
describing?
Possibly. I'll try to provide as much information as possible.
Post by Chris Uzdavinis (TeamB)
Holding smart pointers will solve this problem.
From boost.org: "a shared_ptr cannot correctly hold a pointer to a
dynamically allocated array." That seems to be just my case: the pointer
(pb) points to a vector. Is that a dynamically allocated array?
No. A vector is a single object. An array is allocated with operator
new[], which doesn't work with shared_ptr because its default deleter
uses operator delete. (And allocated arrays require operator delete[]
in order to function properly.)
Post by Zhan Weiming
I think the point is: in DeleteOneItem(), is the memory allocated
to the vector freed?
Vectors manage memory internally using an allocator, which may use
pooling. STL containers have a concept of memory allocated (the
capacity) and memory used (the size, a subset of the allocated memory).
When a big vector becomes small, it may continue to hold onto its
memory as an optimization for future writes (vector inserts are slow,
O(n), whenever the buffer must resize).

As such, if your vector itself gets big then small and then hangs
around for a long time, it may still be internally using memory as if
it were still big.

If you actually destroy the vector, the memory is of course released.
This behavior typically only happens when the vector itself remains
for a long time, but its *elements* are coming and going.
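You can watch this happen (a minimal sketch; exact capacity values vary
by implementation):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v(1000000);
    std::cout << v.size() << " / " << v.capacity() << '\n'; // both about 1000000
    v.clear();
    std::cout << v.size() << " / " << v.capacity() << '\n'; // size 0, capacity typically unchanged
    return 0;
}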

If the size of this vector is slowly getting bigger and bigger (at the
high-watermark) then you may also be running into memory fragmentation
issues, such that it cannot reuse any pre-allocated memory because no
free block is big enough and contiguous enough for the larger vector
requirements.

If that's the case, you might try reserving a really big amount of
memory up front: if you can estimate the worst-case scenario that you
expect to handle, and add say 10%, then you might avoid all the memory
consumption of resizing and memory fragmentation.
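Something like this (a sketch; EXPECTED_WORST_CASE is a placeholder you
would have to estimate for your own load):

const std::size_t EXPECTED_WORST_CASE = 50000; // hypothetical estimate
c.reserve(EXPECTED_WORST_CASE + EXPECTED_WORST_CASE / 10); // worst case plus ~10%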

Just some ideas.
--
Chris (TeamB);
Zhan Weiming
2008-06-25 03:46:52 UTC
Post by Chris Uzdavinis (TeamB)
Vectors manage memory internally using an allocator, which may use
pooling. STL containers have a concept of memory allocated (the
capacity) and memory used (the size, a subset of the allocated memory).
When a big vector becomes small, it may continue to hold onto its
memory as an optimization for future writes (vector inserts are slow,
O(n), whenever the buffer must resize).
As such, if your vector itself gets big then small and then hangs
around for a long time, it may still be internally using memory as if
it were still big.
If you actually destroy the vector, the memory is of course released.
This behavior typically only happens when the vector itself remains
for a long time, but its *elements* are coming and going.
If the size of this vector is slowly getting bigger and bigger (at the
high-watermark) then you may also be running into memory fragmentation
issues, such that it cannot reuse any pre-allocated memory because no
free block is big enough and contiguous enough for the larger vector
requirements.
If that's the case, you might try reserving a really big amount of
memory up front: if you can estimate the worst-case scenario that you
expect to handle, and add say 10%, then you might avoid all the memory
consumption of resizing and memory fragmentation.
To describe my questions more clearly, I copy the definitions here again.

//------------------------------------
typedef struct
{
    int x;
    int y;
} TypeA;

typedef vector<TypeA> TypeB;
typedef vector<TypeB*> TypeC;

TypeA a;
TypeB b;
TypeC c;
//------------------------------------

My assumption:

Since the elements of vector c are pointers, c holds N*4 bytes of
contiguous memory, where N is the size of c (not too much, assuming
32-bit pointers).

Each vector that a pointer in c refers to holds M*sizeof(TypeA) bytes
of contiguous memory, where M is the size of that vector. But the
memory blocks of the vectors that the pointers in c refer to are not
necessarily contiguous with each other.

Am I right?

Zhan
Thomas Maeder [TeamB]
2008-06-25 06:39:04 UTC
Post by Zhan Weiming
//------------------------------------
typedef struct
{
int x;
int y;
}TypeA;
typedef vector<TypeA> TypeB;
typedef vector<TypeB*> TypeC;
TypeA a;
TypeB b;
TypeC c;
//------------------------------------
Since the elements of vector c are pointers, c holds N*4 bytes of
contiguous memory, where N is the size of c (not too much, assuming
32-bit pointers).
Each vector that a pointer in c refers to holds M*sizeof(TypeA) bytes
of contiguous memory, where M is the size of that vector. But the
memory blocks of the vectors that the pointers in c refer to are not
necessarily contiguous with each other.
Am I right?
Yes. If m0 is the memory dynamically allocated by *c[0] to hold its
TypeA elements, and m1 is the memory dynamically allocated by *c[1] to
hold its TypeA elements, then each of m0 and m1 is a contiguous block
by itself, but m0 and m1 are not necessarily neighboring areas.
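Pictured roughly (not to scale):

c's buffer:     [ TypeB* | TypeB* | ... ]  one contiguous block of N pointers
*c[0]'s buffer: [ TypeA | TypeA | ... ]    contiguous block m0, somewhere on the heap
*c[1]'s buffer: [ TypeA | TypeA | ... ]    contiguous block m1, possibly far from m0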
Zhan Weiming
2008-06-28 13:26:03 UTC
Post by Chris Uzdavinis (TeamB)
How do you know that this vector memory management is the actual
source of the problem?
Your doubt is correct. After checking again, I am sorry to find that
the "out of memory" problem is caused by memory not being released
after the records are written to the database:

void WriteRecords()
{
    /*
    TDAODatabase * MyExportDatabase;
    TDAOTable * MyExportTable;

    //where TDAODatabase and TDAOTable are the Data Access components
    //used to access Microsoft Access databases. (Diamond Access)
    */

    if(!MyExportDatabase->Connected)
        MyExportDatabase->Open();

    MyExportTable->Active = false;
    MyExportTable->TableName = TableName;
    MyExportTable->Active = true;

    for(int i=0;i<RECORDCOUNT;++i)
    {
        //fill the record that is not empty, not exported, and not ongoing
        MyExportTable->Append();

        //write one record in memory to MyExportTable.

        MyExportTable->Post();
    }

    //release memory holding the records.
    ReleaseMemoryHoldingRecords();
}

WriteRecords() is executed whenever the records reach RECORDCOUNT,
which happens many times. If I comment out everything but the last
statement, memory increases before WriteRecords() and drops back to the
same low level after each execution of WriteRecords(). But if I don't
comment them out, memory does decrease after WriteRecords(), yet ends
up higher than it was after the previous run of WriteRecords(), and
finally "out of memory". That means Append()/Post() consume memory.

Maybe I should ask for help from the Data Access components provider.

Thanks.
Zhan
Zhan Weiming
2008-06-30 08:45:44 UTC
Post by Zhan Weiming
Post by Chris Uzdavinis (TeamB)
How do you know that this vector memory management is the actual
source of the problem?
Your doubt is correct. After checking again, I am sorry to find that
the "out of memory" problem is caused by memory not being released:
void WriteRecords()
{
/*
TDAODatabase * MyExportDatabase;
TDAOTable * MyExportTable;
//where TDAODatabase and TDAOTable are the Data Access components
//used to access Microsoft Access databases. (Diamond Access)
*/
if(!MyExportDatabase->Connected)
MyExportDatabase->Open();
MyExportTable->Active = false;
MyExportTable->TableName = TableName;
MyExportTable->Active = true;
for(int i=0;i<RECORDCOUNT;++i)
{
//fill the record that is not empty, not exported, and not ongoing
MyExportTable->Append();
//write one record in memory to MyExportTable.
MyExportTable->Post();
}
//release memory holding the records.
ReleaseMemoryHoldingRecords();
}
WriteRecords() is executed whenever the records reach RECORDCOUNT,
which happens many times. If I comment out everything but the last
statement, memory increases before WriteRecords() and drops back to the
same low level after each execution of WriteRecords(). But if I don't
comment them out, memory does decrease after WriteRecords(), yet ends
up higher than it was after the previous run of WriteRecords(), and
finally "out of memory". That means Append()/Post() consume memory.
Maybe I should ask for help from the Data Access components provider.
Things are now getting more complex.

class RecordInputer
{
private:
    bool Enabled;
    int RecordCount;
    vector<RecordType*> Records;
    void __fastcall ReleaseMemoryHoldingRecords();
public:
    void __fastcall WriteRecords();
};

In my application, there can be several instances of class RecordInputer.
(They are referred to by pointers in a TList.) While running, if Enabled,
items are added to Records and RecordCount increases. When RecordCount
reaches a limit, say 50000, WriteRecords() is called.

What is strange: when only one instance is enabled, things go fine,
i.e. every time after WriteRecords(), the memory level drops to a
relatively constant value. But if more than one instance is enabled,
the memory level doesn't drop after WriteRecords().

All lines but the last in WriteRecords() are commented out, i.e. only
ReleaseMemoryHoldingRecords() is called in WriteRecords(), and it has
now been changed to use auto_ptr.

RecordInputer::ReleaseMemoryHoldingRecords()
{
    for(int i=0;i<Records.size();i++)
        std::auto_ptr<RecordType> killer(Records[i]);

    Records.clear();
}

The memory level is watched using Windows Task Manager.

Any ideas?

Thanks.
Thomas Maeder [TeamB]
2008-06-30 15:35:28 UTC
Post by Zhan Weiming
class RecordInputer
{
bool Enabled;
int RecordCount;
vector<RecordType*> Records;
void __fastcall ReleaseMemoryHoldingRecords();
void __fastcall WriteRecords();
};
RecordInputer::ReleaseMemoryHoldingRecords()
{
for(int i=0;i<Records.size();i++)
std::auto_ptr<RecordType> killer(Records[i]);
Records.clear();
}
The memory level is watched using Windows Task Manager.
Any ideas?
Yes.

1) Task Manager is not a very useful tool for this purpose.

2) clear() may simply destruct the elements and reset the end pointer,
without releasing any memory at all.

Try

vector<RecordType*>().swap(Records);

instead. This will reset the guts of Records to the state of a
default-constructed vector and release the memory so far occupied by Records.
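The idiom at work (a minimal sketch): the unnamed temporary swaps buffers
with Records, and when the temporary dies at the end of the full
expression it takes the old (large) buffer with it.

std::vector<RecordType*> Records;
// ... grows large, elements deleted ...
std::vector<RecordType*>().swap(Records);
// Records.capacity() is now 0 (typically); clear() alone would usually leave it unchanged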
Zhan Weiming
2008-06-30 17:28:13 UTC
Thanks for your reply.
Post by Thomas Maeder [TeamB]
Post by Zhan Weiming
RecordInputer::ReleaseMemoryHoldingRecords()
{
for(int i=0;i<Records.size();i++)
std::auto_ptr<RecordType> killer(Records[i]);
Records.clear();
}
2) clear() may simply destruct the elements and reset the end pointer,
without releasing any memory at all.
Since Records holds pointers, that means 4 bytes for each item. That is
not too much.

The memory that I want to free is the memory the pointers in Records
refer to. Those are also vectors, and they occupy lots of memory.

Does std::auto_ptr<RecordType> killer(Records[i]) free the vector
referred to by Records[i]?
Seems yes (at least sometimes; see below).
Post by Thomas Maeder [TeamB]
Try
vector<RecordType*>().swap(Records);
instead. This will reset the guts of Records to the state of a
default-constructed vector and release the memory so far occupied by Records.
Tried, but got the same results, i.e. when only one RecordInputer is
enabled, the memory is released, and if more than one, it doesn't work.

Zhan
Thomas Maeder [TeamB]
2008-07-01 15:33:25 UTC
Post by Zhan Weiming
Post by Thomas Maeder [TeamB]
Post by Zhan Weiming
RecordInputer::ReleaseMemoryHoldingRecords()
{
for(int i=0;i<Records.size();i++)
std::auto_ptr<RecordType> killer(Records[i]);
Records.clear();
}
2) clear() may simply destruct the elements and reset the end pointer,
without releasing any memory at all.
Since Records holds pointers, that means 4 bytes for each item. That
is not too much.
The memory that I want to free is the memory the pointers in Records
refer to.
I had assumed that the auto_ptr acrobatics you are doing before clear()
would take care of that. Does it? I.e. is the RecordType destructor
invoked as many times as it should be?
Chris Uzdavinis (TeamB)
2008-07-01 16:27:23 UTC
Post by Zhan Weiming
{
for(int i=0;i<Records.size();i++)
std::auto_ptr<RecordType> killer(Records[i]);
Records.clear();
}
To OP:

I recognize "killer" here as something derived from a code example I
gave. While it's correct in terms of behavior, it's a bit more
complex than is necessary:

For something like this, where you don't need to keep the object alive
over some scope but simply want it deleted on the spot, just use delete
directly:

for(int i=0;i<Records.size();i++)
    delete Records[i];

But even better, IMHO, is the use of smart pointers, as they will work
even if you forget to do a loop over your items, deleting them.
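A minimal sketch of that (using boost::shared_ptr, which others suggest
below):

std::vector<boost::shared_ptr<RecordType> > Records;
// ...
Records.clear(); // each shared_ptr deletes its RecordType; no loop to forget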
--
Chris (TeamB);
Bob Gonder
2008-07-01 19:32:37 UTC
Post by Chris Uzdavinis (TeamB)
But even better, IMHO, is the use of smart pointers, as they will work
even if you forget to do a loop over your items, deleting them.
I think that's what he thought he was doing.
What I think you are saying is (and the syntax may be a bit off)

Replace
vector<RecordType*> Records;
with
std::vector< std::auto_ptr<RecordType> > Records;

I'm thinking a typedef might be nice too.
Alex Bakaev [TeamB]
2008-07-01 19:57:10 UTC
Post by Bob Gonder
std::vector< std::auto_ptr<RecordType> > Records;
I think it should be

std::vector <boost::shared_ptr <RecordType> > Records;
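Combining that with Bob's typedef suggestion (a sketch):

typedef boost::shared_ptr<RecordType> RecordPtr;
typedef std::vector<RecordPtr> RecordList;

RecordList Records; // the elements clean up after themselves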
Alan Bellingham
2008-07-01 21:50:45 UTC
Post by Bob Gonder
std::vector< std::auto_ptr<RecordType> > Records;
The C++ Standards Committee actually hope that that won't compile. With
a large number of compilers, they're right.

(Alex shows the right way.)

Alan Bellingham
--
Team Browns
<url:http://www.borland.com/newsgroups/> Borland newsgroup descriptions
<url:http://www.borland.com/newsgroups/netiquette.html> netiquette
Chris Uzdavinis (TeamB)
2008-07-01 23:55:40 UTC
Post by Bob Gonder
Post by Chris Uzdavinis (TeamB)
But even better, IMHO, is the use of smart pointers, as they will work
even if you forget to do a loop over your items, deleting them.
I think that's what he thought he was doing.
What I think you are saying is (and the syntax may be a bit off)
Replace
vector<RecordType*> Records;
with
std::vector< std::auto_ptr<RecordType> > Records;
Good idea, but a bad choice of smart pointer. auto_ptr (soon to be
deprecated, BTW) is not a valid element type for an STL container;
using it in one yields undefined behavior. Its copy semantics are
incompatible with STL requirements.

With temporaries possibly being created inside the vector while it
resizes its memory, or in algorithms, and so on, it's entirely possible
for ownership to slip away from the vector and into (and out with) a
temporary.
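The core problem in two lines (a minimal sketch):

std::auto_ptr<RecordType> a(new RecordType);
std::auto_ptr<RecordType> b = a; // "copy" transfers ownership: a is now null
// any container or algorithm that copies elements can silently do the same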
--
Chris (TeamB);
Zhan Weiming
2008-07-03 17:46:09 UTC
Post by Chris Uzdavinis (TeamB)
Post by Zhan Weiming
{
for(int i=0;i<Records.size();i++)
std::auto_ptr<RecordType> killer(Records[i]);
Records.clear();
}
for(int i=0;i<Records.size();i++)
delete Records[i];
Whether I use the auto_ptr stuff or delete, the result is the same.

Here is the more detailed (still simplified) code:

//-----------------------------------------------
#define DB_STRING_CELL_SIZE 40

typedef struct
{
    char FieldIndex;
    char CharValue[DB_STRING_CELL_SIZE+1];
    int Value;
    int DecValue;
} DB_CellType;

typedef std::vector<DB_CellType> RecordType;

class RecordInputer
{
private:
    DB_CellType cellEntry;
    std::vector<RecordType*> Records;
    void __fastcall ReleaseMemoryHoldingRecords();
    int RecordCount;
    bool Enabled;
public:
    __fastcall RecordInputer(){ RecordCount = 0; }
    bool IsEnabled(){ return Enabled; }
    int GetRecordCount(){ return RecordCount; }
    void __fastcall CreateRecord();
    void __fastcall FillRecord(unsigned int index);
    void __fastcall CompleteRecord(unsigned int index);
    //no writing, just release memory for testing
    void __fastcall WriteRecords(){ ReleaseMemoryHoldingRecords(); }
};

RecordInputer::ReleaseMemoryHoldingRecords()
{
    for(int i=0;i<Records.size();i++)
        delete (Records[i]);

    Records.clear();

    RecordCount = 0;
}

RecordInputer::CreateRecord()
{
    DB_CellType *pRecord = new DB_CellType;
    Records.push_back(pRecord);
}

RecordInputer::FillRecord (unsigned int index)
{
    // a loop to fill a record
    while(!finish_once_record_filling)
    {
        cellEntry.FieldIndex = field_index;
        cellEntry.Value = value;
        Records[index]->push_back(cellEntry);
    }
}

RecordInputer::FillRecord ()
{
    if(a_record_completed)
    {
        ++RecordCount;
    }
}

class DataHandler
{
public:
    TList * RecordInputerList; //contains the pointers referring to the
                               //RecordInputer instances
    void mainloop();
    // ...
};

void DataHandler::mainloop()
{
    do
    {
        for(int i=0;i<RecordInputerList->Count;i++)
        {
            RecordInputer * pRI =
                (RecordInputer *)RecordInputerList->Items[i];

            if(!pRI->IsEnabled())
                continue;

            if(pRI->GetRecordCount()==MaxRecordCount)
            {
                pRI->WriteRecords();
            }

            if(condition_for_creating_a_record)
                pRI->CreateRecord();

            if(condition_for_fill_record_N)
                pRI->FillRecord(N);
        }
    } while(!all_records_proceeded);
}
//------------------------------------------------------------------------------------

What confuses me is: if only one RecordInputer is enabled,
ReleaseMemoryHoldingRecords seems to work (I can see from the Windows
Task Manager that the used memory drops).
But if more than one is enabled, after ReleaseMemoryHoldingRecords the
used memory just does not change!
Zhan Weiming
2008-07-03 17:57:40 UTC
Sorry, CompleteRecord was wrong and is missing in the mainloop. Corrections:
Post by Zhan Weiming
void __fastcall CompleteRecord (unsigned int index);
void __fastcall CompleteOneRecord ();
Post by Zhan Weiming
RecordInputer::FillRecord ()
{
if(a_record_completed)
{
++RecordCount;
}
}
RecordInputer::CompleteOneRecord ()
{
... ...
}
Post by Zhan Weiming
void DataHandler::mainloop()
{
do
{
for(int i=0;i<RecordInputerList->Count;i++)
{
if(condition_for_fill_record_N)
pRI->FillRecord(N);
//additional,
if(condition_for_one_record_complete)
{
pRI->CompleteOneRecord ();
}
Post by Zhan Weiming
}
}while(!all_records_proceeded);
}
//------------------------------------------------------------------------------------
Bob Gonder
2008-07-04 01:36:35 UTC
Post by Zhan Weiming
typedef struct
{
}DB_CellType;
typedef std::vector<DB_CellType> RecordType;
class RecordInputer
{
std::vector<RecordType*> Records;
RecordInputer::CreateRecord()
{
DB_CellType *pRecord = new DB_CellType;
Records.push_back(pRecord);
Does that compile?
Records is a vector of pointers to vectors of DB_CellType objects.
Shouldn't you be creating a RecordType instead?
Post by Zhan Weiming
What confuses me is: if only one RecordInputer is enabled,
ReleaseMemoryHoldingRecords seems to work (I can see from the Windows
Task Manager that the used memory drops).
But if more than one is enabled, after ReleaseMemoryHoldingRecords the
used memory just does not change!
Task Manager only knows how much memory the OS has given the memory
manager, not how much the manager has given the application.

Often the memory manager holds onto memory so it
can reuse it without bothering the OS so often.

Try using GlobalMemoryStatus() to check memory usage.
Or GetProcessMemoryInfo()
http://msdn.microsoft.com/en-us/library/ms683219(VS.85).aspx
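For instance (a minimal sketch; link against psapi.lib):

#include <windows.h>
#include <psapi.h>
#include <iostream>

int main()
{
    PROCESS_MEMORY_COUNTERS pmc;
    if(GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
    {
        std::cout << "WorkingSetSize: " << pmc.WorkingSetSize << '\n'
                  << "PagefileUsage:  " << pmc.PagefileUsage  << '\n';
    }
    return 0;
}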

Or, if it really bothers you, check out FastMM4 on Google.
I think it has debug outputs that show actual run-time values.
http://sourceforge.net/projects/fastmm/
It is primarily for Delphi, but has a (short) note on C++ use.
Zhan Weiming
2008-07-04 02:38:52 UTC
Post by Bob Gonder
Post by Zhan Weiming
RecordInputer::CreateRecord()
{
DB_CellType *pRecord = new DB_CellType;
Records.push_back(pRecord);
Does that compile?
Records is a vector of pointers to vectors of DB_CellType objects.
Shouldn't you be creating a RecordType instead?
Sorry, it was RecordType.
Post by Bob Gonder
TM only know how much the OS has given the manager,
not how much the manger has given the application.
Often the memory manager holds onto memory so it
can reuse it without bothering the OS so often.
Try using GlobalMemoryStatus() to check memory usage.
Or GetProcessMemoryInfo()
http://msdn.microsoft.com/en-us/library/ms683219(VS.85).aspx
Or, if it really bothers you, check out FastMM4 on Google.
I think it has debug outputs that show actual run-time values.
http://sourceforge.net/projects/fastmm/
It is primarily Delphi, but has a (short) note for CPP use.
I'll check that.

Thank you.
