6.0.0-beta1
7/16/25

[#2315] Provide VFS db drivers that spool blobs using streams
Summary: Provide VFS db drivers that spool blobs using streams
Queue: Horde Framework Packages
Queue Version: HEAD
Type: Enhancement
State: Duplicate
Priority: 1. Low
Owners:
Requester: nomis80 (at) lqt (dot) ca
Created: 07/21/2005 (7300 days ago)
Due:
Updated: 11/09/2008 (6093 days ago)
Assigned: 08/31/2005 (7259 days ago)
Resolved: 11/09/2008 (6093 days ago)
Milestone: Horde 4.0
Patch: No

History
11/09/2008 01:28:27 AM Chuck Hagenbuch Comment #18
State ⇒ Duplicate
Deprecating in favor of #6992.
05/12/2007 08:48:48 PM Chuck Hagenbuch State ⇒ Stalled
 
12/01/2005 11:24:10 PM Chuck Hagenbuch Taken from Michael Slusarz
 
12/01/2005 11:23:56 PM Chuck Hagenbuch Summary ⇒ Provide VFS db drivers that spool blobs using streams
 
12/01/2005 11:23:33 PM Chuck Hagenbuch Comment #17
Version ⇒ HEAD
Queue ⇒ Horde Framework Packages
Type ⇒ Enhancement
State ⇒ Accepted
Priority ⇒ 1. Low
Moving to a VFS ticket
12/01/2005 11:21:22 PM Jan Schneider Comment #16
IIRC MDB2 supports BLOB access through streams.
12/01/2005 11:11:03 PM Chuck Hagenbuch Comment #15
Going to leave this stalled for now; if at some point we move to a 
database backend based on PDO, or something else that takes a stream 
for BLOBs instead of requiring us to load the file into memory, we can 
make the db driver much more efficient.
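For illustration, the stream-based approach described above is what PDO offers through PDO::PARAM_LOB: you bind an open stream rather than an in-memory string, so the driver can read the file chunk-wise. A minimal sketch; the vfs_files table, column names, and DSN are made up for this example, and whether data is truly streamed rather than buffered depends on the particular PDO driver:

```php
<?php
// Hypothetical sketch: table/column names and DSN are illustrative.
// PDO::PARAM_LOB binding itself is documented PDO behavior.
$db = new PDO('pgsql:host=localhost;dbname=horde', 'horde', 'secret');
$db->beginTransaction();

$stmt = $db->prepare(
    'INSERT INTO vfs_files (file_name, file_data) VALUES (?, ?)'
);

// Open the spooled upload as a read-only binary stream.
$fp = fopen('/tmp/attachment.bin', 'rb');

$stmt->bindValue(1, 'attachment.bin');
// Bind the stream itself, not its contents, so the whole file never
// has to sit in PHP's memory at once.
$stmt->bindParam(2, $fp, PDO::PARAM_LOB);
$stmt->execute();

$db->commit();
fclose($fp);
```

Reading can follow the same pattern: bindColumn(..., PDO::PARAM_LOB) may return the column as a stream that can be fpassthru()'d to the client.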
11/15/2005 07:11:23 PM rsantos (at) ruisantos (dot) com Comment #14
I do not know exactly what you mean by VFS backend, but I'm assuming 
it's either the Horde framework version or the 'If a VFS (virtual 
filesystem) backend is required, which one should we use?' option in 
the 'Virtual File Storage' tab under Horde configuration.

Horde framework version: 3.0.3

'If a VFS (virtual filesystem) backend is required, which one should 
we use?' is SQL Database with Horde Defaults.

The database I use is MySQL over TCP/IP.
11/15/2005 07:05:27 PM Chuck Hagenbuch Comment #13
So to be clear, this only happens for you in IMP, and only using VFS? 
It's possible that we could modify the VFS library to be more memory 
efficient. Which VFS backend are you using?
11/15/2005 07:02:44 PM rsantos (at) ruisantos (dot) com Comment #12
I forgot to tell you all that, if you deactivate the 'Should we use 
the Horde VFS system for storing uploaded attachments?' option under 
the 'Compose' tab of IMP configuration, all works well...

Will have to find out what that VFS thingy is...

Hope it helps someone...

11/15/2005 06:29:59 PM rsantos (at) ruisantos (dot) com Comment #11
I have the same problem as nomis80 (at) lqt (dot) ca.

But I can add something... I only have those problems with attachments 
in IMP. I also use Gollem, and it accepts files up to the size it 
should.

If anyone requires any configuration details, just ask...

09/17/2005 07:47:07 AM Jan Schneider State ⇒ Stalled
 
09/01/2005 03:52:01 AM Michael Slusarz Comment #10
As mentioned previously, I can't reproduce this, so until someone 
produces a patch this is just going to sit here.
08/31/2005 12:09:31 PM nomis80 (at) lqt (dot) ca Comment #9
Well, the bug isn't fixed so I guess it shouldn't be closed.
08/31/2005 09:19:11 AM Jan Schneider Comment #8
Assigned to Michael Slusarz
State ⇒ Assigned
Is there anything to add to this discussion, or can the ticket be closed?
07/29/2005 01:22:18 PM nomis80 (at) lqt (dot) ca Comment #7
=== OFF TOPIC, JUMP DOWN IF NOT INTERESTED ===
I really am not sure what I said to set you off so much.  I was not
trying to be condescending, and I am sorry you took it that way.
Let me explain by rephrasing and condensing your initial reply. I 
submitted a bug and you said approximately the following: "Even though 
I haven't tested it, I don't think this bug is real. And you haven't 
done your homework because it's in plain sight in the mailing list 
archives and in the bug index, but I haven't taken the time to verify 
that claim. Please don't bother us again, we're busy." Don't you 
understand now?



Let me take this further and explain to you how you *should* have 
replied, so that you may learn from this experience. (Side note: Yeah, 
I am being condescending on purpose, but in a sarcastic way.)



First, before any reply, if you think this bug has been discussed 
before, you absolutely need to search for the URL. If you can't be 
bothered to do that, you just can't reply telling me that this bug has 
already been discussed. That would be considered rude. Let other 
people search instead, and see the bug closed without your 
intervention. If you think it is so evident that the bug has already 
been discussed, that it is so much in plain sight, it shouldn't be 
hard for you to find a URL. Taking the time to find the URL also has 
the advantage of preventing you from being wrong, as you were with me. 
When you give a URL you have proof of your claim: the bug really has 
been discussed before.



Assuming you have determined that the bug hasn't been discussed 
before, the second step is, still before any reply, trying to 
reproduce that bug, assuming you have any interest in fixing it. If 
you don't have any interest in fixing it then you can just ignore the 
bug, other people will try. If you can't reproduce it, you reply 
something along the lines of "Couldn't reproduce. Test setup: mysql 
X.Y.Z, IMP 3.X, Apache X.", you set the state to feedback and you wait 
for more info. If you can reproduce it, then you may confirm the bug. 
Assuming you still are interested in fixing it, you may proceed to 
actually fixing it.



Through all these steps, if you are not interested in fixing the bug, 
just leave it alone. Other people will be interested.
> As far as providing a URL where you could find information -
> unfortunately, I am not a paid support agent of Horde.  I would
> really love to help everyone, and if I had a URL close at hand I
> would have given it.  Instead I suggested a place you could search.
And by doing this you took a risk: maybe you were wrong, and such a 
URL didn't exist. And this time, you were indeed wrong. Another 
guideline: don't suggest places where one can search. That's 
condescending. It's exactly as if I explained to you the initial steps 
in the bug fixing process (oops, I already did that, it seems).



=== BACK TO ON TOPIC STUFF ===
> First of all, internet mail is
> not the place to be sending around 10 MB files but this religious
> argument has already taken place on the mailing lists and I will not
> address it anymore here.
That's not the point, but I agree: let's not be religious.
> Second, I can send 10 MB files without PHP
> using anywhere near 320 MB of memory so it is clearly not
> reproducible for everyone.
AH! New information! Thanks for finally trying to reproduce the bug. 
Can you elaborate on what versions were used in your test setup? Were 
you using the VFS for storing attachments? Let's try to isolate the bug.
> Third, this *is* a PHP issue as much as
> you don't believe it is.
I think you misunderstand what this bug is about. This is not about "I 
can't send attachments of size N or bigger". It is about "I can't send 
attachments of the size IMP has been configured to accept". If I set 
the maximum to 10M, I can send attachments up to about 1M. If I set 
the maximum to 2M, it's about 500K. As I go smaller, the maximum size 
goes smaller too. But the actual limit is never equal to what IMP has 
been setup to accept. That's the problem.



Analogy: the speedometer on my car tells me I'm going at 100 km/h but 
I'm really at 50. Then I go down to 50, it tells me I'm at 25. That's 
a calibration problem, and it's the same kind of problem that this bug 
is about.
07/29/2005 05:06:05 AM Michael Slusarz Comment #6
State ⇒ Feedback
Priority ⇒ 1. Low
I really am not sure what I said to set you off so much.  I was not 
trying to be condescending, and I am sorry you took it that way.



As far as providing a URL where you could find information - 
unfortunately, I am not a paid support agent of Horde.  I would really 
love to help everyone, and if I had a URL close at hand I would have 
given it.  Instead I suggested a place you could search.



As for your statement that it is obviously IMP's fault for using too 
much memory, I respectfully disagree.  First of all, internet mail is 
not the place to be sending around 10 MB files but this religious 
argument has already taken place on the mailing lists and I will not 
address it anymore here.  Second, I can send 10 MB files without PHP 
using anywhere near 320 MB of memory so it is clearly not reproducible 
for everyone.  Third, this *is* a PHP issue as much as you don't 
believe it is.  IMP has absolutely NO control over how we retrieve the 
data from the uploaded file, for example, since we are stuck with 
functions PHP provides to do this.  When we are encoding the file this 
is done via a PHP function - we have no control over the memory usage 
in this function either.  Swapping data to disk to conserve memory is 
not really an option since this really isn't the job of userland code. 
Once again, if you search discussions on the mailing lists you will 
find that this topic has come up before and the consensus is that PHP 
is pretty darn liberal when it comes to allocating memory (I think 
this is one of the items they were shooting to improve on in PHP 5).



I don't doubt that we could make some improvements but simply 
complaining about the issue without knowing the underlying facts, or 
offering to submit improvements, isn't going to help us out much.
07/26/2005 05:24:23 PM nomis80 (at) lqt (dot) ca Comment #5
> are you using mySQL to store
> your attachments?
Yes.
> If so, you have not searched either the bug index
> or the mailing list
I don't understand your reasoning. I do store my attachments in MySQL, 
and I did search the mailing list and the bug index.
> where the issue has been addressed numerous times.
Have you heard of URLs? They're very useful for *locating* stuff, you 
know. Maybe you could give one to me, that would be helpful.
> Or have you run test.php, like docs/INSTALL tells you to? It
> clearly states that memory_limit should be DISABLED.
Hahaha, sure. Please, try to understand the problem. I want to send a 
10M attachment, and IMP is already eating 320M and the disk is 
thrashing to swap. You really think removing the limit would make IMP 
not consume that much memory? The problem is not that the limit is too 
low, it's that IMP uses *too much memory*.
> As for PHP usage, we can try to do as much as possible to free
> memory, but we are limited as to how PHP allocates memory.  So if you
> are having major issues with PHP's memory usage, you need to bug the
> PHP folks.
Yeah, sure, it's a bug in PHP. And maybe you'll be interested in 
hearing about the memory leak in malloc() that I found this morning. 
And I've found a security flaw in "rm" the other day.



IMP really does eat too much memory when sending attachments, and 
maybe if you had replied something like "could not reproduce" then I 
would have believed you, rather than you simply dismissing my bug 
report because you're too narrow-minded to believe that there could be 
a tiny chance that I really did witness that bug, spent countless 
hours measuring, tweaking, profiling and generally hacking, searched 
the documentation, mailing list archives, bug index, and finally, out 
of desperation, resigned myself to filing a bug.
07/26/2005 04:02:37 PM Michael Slusarz Comment #4
For the suggestion that this is a bug, are you using mySQL to store 
your attachments?  If so, you have not searched either the bug index 
or the mailing list where the issue has been addressed numerous times.



Or have you run test.php, like docs/INSTALL tells you to? It 
clearly states that memory_limit should be DISABLED.



As for PHP usage, we can try to do as much as possible to free memory, 
but we are limited as to how PHP allocates memory.  So if you are 
having major issues with PHP's memory usage, you need to bug the PHP 
folks.


07/25/2005 12:40:47 PM nomis80 (at) lqt (dot) ca Comment #3
This is clearly a BUG and not an ENHANCEMENT because with default PHP, 
Horde and IMP configurations, attachments bigger than ~640KB can't be 
uploaded.

upload_max_filesize = 2M
memory_limit = 8M
post_max_size = 8M
07/22/2005 11:07:38 PM jigermano (at) uolsinectis (dot) com (dot) ar Comment #2
I would like to make a comment on this since I have been working today 
on this very issue. I had the memory limit at 120Mb. It was exhausted 
trying to upload a 10Mb attachment. The max_allowed_packet was also 
10Mb, and the actual query being received by MySQL was a little bigger 
than that, and that's when things go really wrong.

VFS uses DB. Right after executing mysql_query (DB/mysql.php) - 
failing because of the max_allowed_packet thing - 
mysql::mysqlRaiseError is called, which in turn calls 
DB_common::raiseError. Now, DB has a property called last_query, 
which as you may guess is the last executed query. Here is a snippet 
of DB_common::raiseError:

         if ($userinfo === null) {
             $userinfo = $this->last_query;
         }
         // ...
         $tmp = PEAR::raiseError(null, $code, $mode, $options, $userinfo,
                                 'DB_Error', true);
         return $tmp;

Is it clear that last_query is more than 10Mb? The object returned 
includes the whole query, and is returned to compose (in my case, at 
this point 80Mb was being used). Now, I had error_reporting == E_ALL, 
so the query was passed to the logger (by value) and it then reached 
the memory limit. I changed error_reporting to E_WARNING 
(horde/config/conf.php).

Now, if I modified max_allowed_packet so no error would happen, there 
would be no problem. But keep in mind that any kind of DB-related 
error would cause this same thing. So I also changed 
DB_common::raiseError so the query would not be included in the error 
report.

The peak memory usage reported by xdebug with the 10Mb attachment at 
this point (no DB error, error_reporting == E_WARNING) is 58302568 
bytes (55Mb), which I still think is a little high, but I haven't 
looked into it yet.

On the issue of the logging level, I completely understand if someone 
would argue that if you use E_ALL you are responsible for having 
enough memory to handle it, given that this is coded in VFS and DB.

There are other things, though, to worry about related to attachments 
and memory: what happens if I upload N files, all below post_max_size, 
max_allowed_packet, upload_max_filesize and conf.php's 
max_attachment_size, and then send the message? Memory exhausted (and 
maybe a message from your MTA saying the file is too big). Thing is, 
max_attachment_size is per attachment. So you can actually upload as 
many files as you want.

What happens if you are writing an email, you have uploaded some files 
(of acceptable filesize), and you try to upload a file with filesize 
larger than post_max_size? PHP rejects the whole POST, so $_POST is 
completely empty, and so will be your compose window (since the state 
info was in the POST).

Some, most, and maybe all of this stuff does not fall in the BUG 
category; an ENHANCEMENT ticket might be necessary for the number of 
attachments, and maybe to try to find a solution to the post_max_size 
problem. The POST goes to the filesystem, and only the non-file fields 
are actually sent to PHP along with the info of the uploaded files, so 
I guess post_max_size could be huge granted there is good control 
afterwards.

I guess this is more dev-list stuff, but I started commenting on the 
bug and I got carried away. Really, sorry for the long post.
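The DB_common::raiseError change described in this comment (keeping the full failed query out of the error report) can be sketched as follows; the function name and the 200-byte cutoff are illustrative, not the actual patch:

```php
<?php
// Illustrative sketch of the idea above: attach only a short prefix
// of the failed query to the error, so a failed multi-MB BLOB INSERT
// doesn't drag the whole query text into the error object and the
// logger. The helper name and cutoff are made up for this example.
function truncated_query_info($lastQuery, $limit = 200)
{
    if (strlen($lastQuery) <= $limit) {
        return $lastQuery;
    }
    // Keep enough of the query to identify it in the logs; drop the payload.
    return substr($lastQuery, 0, $limit)
        . sprintf(' ... [%d more bytes truncated]',
                  strlen($lastQuery) - $limit);
}

// A ~10Mb INSERT with an inlined BLOB shrinks to a short log line:
$huge = "INSERT INTO horde_vfs (vfs_data) VALUES ('"
    . str_repeat('x', 10 * 1024 * 1024) . "')";
echo strlen(truncated_query_info($huge)), "\n";   // a few hundred bytes
```

In DB_common::raiseError, the same effect would come from passing such a truncated string as $userinfo instead of $this->last_query.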


07/21/2005 02:20:06 PM nomis80 (at) lqt (dot) ca Comment #1
Priority ⇒ 2. Medium
State ⇒ Unconfirmed
Queue ⇒ IMP
Type ⇒ Bug
Summary ⇒ Uploading attachment uses too much memory
I wanted to permit 10MB attachments so I changed these values in php.ini:

upload_max_filesize = 10M
memory_limit = 40M
post_max_size = 11M

I figured that since memory_limit was 8M when upload_max_filesize was 
2M (default values on RHEL 4), I should scale memory_limit by the same 
factor as upload_max_filesize to be sure that it would work. I use a 
post_max_size one meg bigger than upload_max_filesize to allow for 
HTTP and encoding overhead.



I also increased the max_allowed_packet of MySQL 4.1.10a to 11M, one 
meg more to allow for packet overhead.



I generated an attachment file of exactly 10MB using this command:

$ dd if=/dev/zero of=attach bs=$((1024*1024*10)) count=1
$ ls -l attach
-rw-rw-r--  1 nomis80 nomis80 10485760 Jul 21 09:54 attach

When I uploaded the file the first time, I hit the memory_limit 
according to this error message:

[client 10.10.1.153] PHP Fatal error:  Allowed memory size of 41943040 
bytes exhausted (tried to allocate 20971522 bytes) in 
/usr/share/pear/DB/common.php on line 356, referer: 
http://10.10.1.202/webmail/imp/compose.php?uniq=1121955430742

Allowed memory size of 41943040 bytes exhausted (tried to allocate 0 bytes)

I proceeded to increase the memory_limit by a factor of two every time 
I received that error. Soon, I was at 320M, and this was causing my 
server to thrash while swapping to disk. I could not figure out 
exactly at which value I had to set memory_limit for the system to 
accept my attachment.
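Worth noting: the "tried to allocate 20971522 bytes" in the error above is almost exactly a second 10MB copy (2 x 10485760, plus change), consistent with PHP duplicating the attachment string at some processing step. The copying effect is easy to demonstrate; this sketch is illustrative only, and exact byte counts vary between PHP versions:

```php
<?php
// Illustration: each transformation of an in-memory upload produces a
// full additional copy of the data. Sizes are illustrative.
$data = str_repeat('x', 10 * 1024 * 1024);    // the "uploaded file": 10Mb

$before = memory_get_usage();
$encoded = base64_encode($data);              // a second, ~13.3Mb copy
$after = memory_get_usage();

printf("original: %d bytes\n", strlen($data));
printf("encoded copy adds roughly %d bytes\n", $after - $before);
```

A few such steps (upload buffer, MIME encoding, the SQL query string, the error object) are enough to multiply a 10MB attachment into the tens or hundreds of MB observed here.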

Saved Queries