Summary | Provide VFS db drivers that spool blobs using streams |
Queue | Horde Framework Packages |
Queue Version | HEAD |
Type | Enhancement |
State | Duplicate |
Priority | 1. Low |
Owners | |
Requester | nomis80 (at) lqt (dot) ca |
Created | 07/21/2005 (7300 days ago) |
Due | |
Updated | 11/09/2008 (6093 days ago) |
Assigned | 08/31/2005 (7259 days ago) |
Resolved | 11/09/2008 (6093 days ago) |
Milestone | Horde 4.0 |
Patch | No |
State ⇒ Duplicate
Version ⇒ HEAD
Queue ⇒ Horde Framework Packages
Type ⇒ Enhancement
State ⇒ Accepted
Priority ⇒ 1. Low
If we provide a database backend based on PDO, or something else that
takes a stream for BLOBs instead of requiring us to load the file into
memory, we can make the db driver much more efficient.
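A minimal sketch of the idea, not Horde code: assuming a PDO connection
in $pdo and a made-up table and columns, PDO::PARAM_LOB lets us hand
the driver a stream instead of the whole file as one huge string (how
much it truly streams depends on the underlying driver):

// open the spooled upload as a stream instead of slurping it into memory
$fp = fopen('/tmp/spooled-attachment', 'rb');

// hypothetical table and columns, for illustration only
$stmt = $pdo->prepare('INSERT INTO vfs_files (vfs_path, vfs_data) VALUES (?, ?)');
$stmt->bindValue(1, 'attachment.bin');
$stmt->bindParam(2, $fp, PDO::PARAM_LOB); // bind the stream, not its contents
$stmt->execute();

fclose($fp);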
It's either the Horde Framework version or the 'If a VFS (virtual
filesystem) backend is required, which one should we use?' option in
the 'Virtual File Storage' tab under the Horde configuration.
The Horde Framework is v3.0.3.
'If a VFS (virtual filesystem) backend is required, which one should
we use?' is set to SQL Database with Horde Defaults.
The database I use is MySQL over TCP/IP.
It's possible that we could modify the VFS library to be more memory
efficient. Which VFS backend are you using?
After disabling the Horde VFS system for storing uploaded attachments
(the option under the 'Compose' tab of the IMP configuration), all
works well...
Will have to find out what that VFS thingy is...
Hope it helps someone...
But I can add something... I only have those problems with attachments
in IMP. I also use Gollem, and it accepts files up to the size it
should accept.
If anyone requires any configuration details, just ask...
Unless someone produces a patch, this is just going to sit here.
Assigned to Michael Slusarz
State ⇒ Assigned
"I was not trying to be condescending, and I am sorry you took it
that way."
I submitted a bug and you said approximately the following: "Even though
I haven't tested it, I don't think this bug is real. And you haven't
done your homework because it's in plain sight in the mailing list
archives and in the bug index, but I haven't taken the time to verify
that claim. Please don't bother us again, we're busy." Don't you
understand now?
Let me take this further and explain to you how you *should* have
replied, so that you may learn from this experience. (Side note: Yeah,
I am being condescending on purpose, but in a sarcastic way.)
First, before any reply, if you think this bug has been discussed
before, you absolutely need to search for the URL. If you can't be
bothered to do that, you just can't reply telling me that this bug has
already been discussed. That would be considered rude. Let other
people do the searching instead, and see the bug closed without your
intervention. If you think it is so evident that the bug has already
been discussed, that it is in such plain sight, then it shouldn't be
hard for you to find a URL. Taking the time to find the URL also has
the advantage of preventing you from being wrong, as you were with me.
When you give a URL you have proof of your claim: the bug really has
been discussed before.
Assuming you have determined that the bug hasn't been discussed
before, the second step is, still before any reply, trying to
reproduce that bug, assuming you have any interest in fixing it. If
you don't have any interest in fixing it then you can just ignore the
bug, other people will try. If you can't reproduce it, you reply
something along the lines of "Couldn't reproduce. Test setup: mysql
X.Y.Z, IMP 3.X, Apache X.", you set the state to feedback and you wait
for more info. If you can reproduce it, then you may confirm the bug.
Assuming you still are interested in fixing it, you may proceed to
actually fixing it.
Through all these steps, if you are not interested in fixing the bug,
just leave it alone. Other people will be interested.
"As far as providing a URL where you could find information -
unfortunately, I am not a paid support agent of Horde. I would really
love to help everyone, and if I had a URL close at hand I would have
given it. Instead I suggested a place you could search."
I searched the place you suggested; the URL didn't exist. And this
time, you were indeed wrong. Another guideline: don't suggest places
where one can search. That's condescending. It's exactly as if I
explained to you the initial steps in the bug-fixing process (oops, it
seems I already did that).
=== BACK TO ON TOPIC STUFF ===
"First of all, internet mail is not the place to be sending around 10
MB files, but this religious argument has already taken place on the
mailing lists and I will not address it anymore here. Second, I can
send 10 MB files without PHP using anywhere near 320 MB of memory, so
it is clearly not reproducible for everyone."
Can you elaborate on what versions were used in your test setup? Were
you using the VFS for storing attachments? Let's try to isolate the bug.
"Third, this *is* a PHP issue as much as you don't believe it is."
This bug is not about "I can't send attachments of size N or bigger".
It is about "I can't send attachments of the size IMP has been
configured to accept". If I set the maximum to 10M, I can send
attachments up to about 1M. If I set the maximum to 2M, it's about
500K. As I go smaller, the maximum size goes smaller too. But the
actual limit is never equal to what IMP has been set up to accept.
That's the problem.
Analogy: the speedometer on my car tells me I'm going 100 km/h, but
I'm really going 50. When I slow down to 50, it tells me I'm at 25.
That's a calibration problem, and it's the same kind of problem this
bug is about.
State ⇒ Feedback
Priority ⇒ 1. Low
I was not trying to be condescending, and I am sorry you took it that way.
As far as providing a URL where you could find information -
unfortunately, I am not a paid support agent of Horde. I would really
love to help everyone, and if I had a URL close at hand I would have
given it. Instead I suggested a place you could search.
As for your statement that it is obviously IMP's fault for using too
much memory, I respectfully disagree. First of all, internet mail is
not the place to be sending around 10 MB files, but this religious
argument has already taken place on the mailing lists and I will not
address it anymore here. Second, I can send 10 MB files without PHP
using anywhere near 320 MB of memory, so it is clearly not reproducible
for everyone. Third, this *is* a PHP issue as much as you don't
believe it is. IMP has absolutely NO control over how we retrieve the
data from the uploaded file, for example, since we are stuck with
functions PHP provides to do this. When we are encoding the file this
is done via a PHP function - we have no control over the memory usage
in this function either. Swapping data to disk to conserve memory is
not really an option since this really isn't the job of userland code.
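To illustrate the point about encoding and copies, here is a rough
sketch (not actual IMP code) of the steps any PHP webmail has to
perform; each step below holds another complete copy of the attachment
in memory at the same time:

// read the spooled upload into a string: one full copy of the file
$data = file_get_contents($_FILES['attachment']['tmp_name']);
// MIME-encode it for the outgoing message: roughly another 1.33 copies
$encoded = chunk_split(base64_encode($data));
// escape it for an SQL INSERT into the VFS backend: yet another copy
$quoted = addslashes($encoded);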
Once again, if you search discussions on the mailing lists you will
find that this topic has come up before and the consensus is that PHP
is pretty darn liberal when it comes to allocating memory (I think
this is one of the items they were shooting to improve on in PHP 5).
I don't doubt that we could make some improvements but simply
complaining about the issue without knowing the underlying facts, or
offering to submit improvements, isn't going to help us out much.
"[...] your attachments? If so, you have not searched either the bug
index or the mailing list where the issue has been addressed numerous
times."
And I did search the mailing list and the bug index. I found no URL
that I know of. Maybe you could give one to me, that would be helpful.
"Or have you run test.php, like docs/INSTALL tells you to, and it
clearly states that memory_limit should be DISABLED."
I'm trying to send a 10M attachment, and IMP is already eating 320M
and the disk is thrashing to swap. You really think removing the limit
would make IMP not consume that much memory? The problem is not that
the limit is too low, it's that IMP uses *too much memory*.
"As for PHP usage, we can try to do as much as possible to free
memory, but we are limited as to how PHP allocates memory. So if you
are having major issues with PHP's memory usage, you need to bug the
PHP folks."
I'm sure the PHP folks will love hearing about the memory leak in
malloc() that I found this morning. And I found a security flaw in
"rm" the other day.
IMP really does eat too much memory when sending attachments, and
maybe if you had replied something like "could not reproduce" then I
would have believed you, rather than you simply dismissing my bug
report because you're too narrow-minded to believe that there could be
a tiny chance that I really did witness that bug, spent countless
hours measuring, tweaking, profiling and generally hacking, searched
the documentation, mailing list archives and bug index, and finally,
out of desperation, resigned myself to filing a bug.
your attachments? If so, you have not searched either the bug index
or the mailing list where the issue has been addressed numerous times.
Or have you run test.php, like docs/INSTALL tells you to, and it
clearly states that memory_limit should be DISABLED.
As for PHP usage, we can try to do as much as possible to free memory,
but we are limited as to how PHP allocates memory. So if you are
having major issues with PHP's memory usage, you need to bug the PHP
folks.
With default Horde and IMP configurations and the following php.ini
settings, attachments bigger than ~640KB can't be uploaded:
upload_max_filesize = 2M
memory_limit = 8M
post_max_size = 8M
I ran into this very issue. I had the memory limit at 120MB. It was
exhausted trying to upload a 10MB attachment. max_allowed_packet was
also 10MB, and the actual query received by MySQL was a little bigger
than that, and that's when things go really wrong.
VFS uses DB. Right after executing mysql_query() (in DB/mysql.php),
which fails because of the max_allowed_packet limit,
DB_mysql::mysqlRaiseError() is called, which in turn calls
DB_common::raiseError(). Now, DB has a property called last_query,
which, as you may guess, is the last executed query. Here is a snippet
of DB_common::raiseError():
if ($userinfo === null) {
    // the full text of the failed query is copied into the error info
    $userinfo = $this->last_query;
}
// ...
$tmp = PEAR::raiseError(null, $code, $mode, $options, $userinfo,
                        'DB_Error', true);
return $tmp;
Is it clear that last_query is more than 10MB? The object returned
includes the whole query, and is returned to compose (in my case,
80MB was in use at this point). Now, I had error_reporting == E_ALL,
so the query was also passed to the logger (by value), and that is
when it hit the memory limit. I changed error_reporting to E_WARNING
(in horde/config/conf.php).
Now, if I modified max_allowed_packet so that no error happened, there
would be no problem. But bear in mind that any DB-related error would
cause this same thing. So I also changed DB_common::raiseError() so
that the query would not be included in the error report.
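For reference, a sketch of the kind of change I made to
DB_common::raiseError (a local, hypothetical patch, not official PEAR
code):

if ($userinfo === null) {
    // keep only a short prefix of the query; copying a multi-MB INSERT
    // into the error object (and again into any log message) is what
    // blows the memory limit
    $userinfo = substr($this->last_query, 0, 1024);
}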
The peak memory usage reported by Xdebug with the 10MB attachment at
this point (no DB error, error_reporting == E_WARNING) is 58302568
bytes (55MB), which I still think is a little high, but I haven't
looked into it yet.
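(For anyone who wants to reproduce the measurement, the figure above
comes from a call like this, which assumes the Xdebug extension is
loaded:)

// log the peak memory used by the request so far (requires Xdebug)
error_log('peak: ' . xdebug_peak_memory_usage() . ' bytes');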
On the issue of the logging level, I completely understand if someone
argues that if you use E_ALL you are responsible for having enough
memory to handle it, given that this is coded in VFS and DB.
There are other things to worry about related to attachments and
memory, though: what happens if I upload N files, all below
post_max_size, max_allowed_packet, upload_max_filesize and conf.php's
max_attachment_size, and then send the message? Memory exhausted (and
maybe a message from your MTA saying the file is too big). The thing
is, max_attachment_size is per attachment, so you can actually upload
as many files as you want.
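Something like this hypothetical guard (all names made up) is what I
mean, capping the combined size instead of each file separately:

// sum the sizes of every file uploaded so far for this message
$total = 0;
foreach ($attachmentFiles as $tmpName) {
    $total += filesize($tmpName);
}
if ($total > $maxTotalAttachmentSize) {
    // refuse the upload now instead of exhausting memory at send time
}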
What happens if you are writing an email, you have already uploaded
some files (of acceptable size), and you try to upload a file larger
than post_max_size? PHP rejects the whole POST, so $_POST is
completely empty, and so will be your compose window (since the state
information was in the POST).
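The case can at least be detected, since PHP still exposes the
Content-Length of the request body it discarded (a sketch, assuming a
standard Apache/PHP setup):

// an oversized POST leaves $_POST and $_FILES empty, but
// CONTENT_LENGTH still shows how big the discarded body was
if ($_SERVER['REQUEST_METHOD'] == 'POST'
    && empty($_POST) && empty($_FILES)
    && $_SERVER['CONTENT_LENGTH'] > 0) {
    // warn the user instead of showing an empty compose window
}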
Some, most, or maybe all of this stuff does not fall into the BUG
category; an ENHANCEMENT ticket might be necessary for the number of
attachments, and maybe to try to find a solution to the post_max_size
problem. The POST goes to the filesystem, and only the non-file fields
are actually sent to PHP along with the info about the uploaded files,
so I guess post_max_size could be huge, granted there is good control
afterwards.
I guess this is more dev-list stuff, but I started commenting on the
bug and got carried away. Really, sorry for the long post.
Priority ⇒ 2. Medium
State ⇒ Unconfirmed
Queue ⇒ IMP
Type ⇒ Bug
Summary ⇒ Uploading attachment uses too much memory
upload_max_filesize = 10M
memory_limit = 40M
post_max_size = 11M
I figured that since memory_limit was 8M when upload_max_filesize was
2M (default values on RHEL 4), I should scale memory_limit by the same
factor as upload_max_filesize to be sure that it would work. I use a
post_max_size one meg bigger than upload_max_filesize to allow for
HTTP and encoding overhead.
I also increased the max_allowed_packet of MySQL 4.1.10a to 11M, one
meg more to allow for packet overhead.
I generated an attachment file of exactly 10MB using this command:
$ dd if=/dev/zero of=attach bs=$((1024*1024*10)) count=1
$ ls -l attach
-rw-rw-r-- 1 nomis80 nomis80 10485760 Jul 21 09:54 attach
When I uploaded the file the first time, I hit the memory_limit
according to this error message:
[client 10.10.1.153] PHP Fatal error: Allowed memory size of 41943040
bytes exhausted (tried to allocate 20971522 bytes) in
/usr/share/pear/DB/common.php on line 356, referer:
http://10.10.1.202/webmail/imp/compose.php?uniq=1121955430742
Allowed memory size of 41943040 bytes exhausted (tried to allocate 0 bytes)
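(Worth noting: 2 x 10485760 = 20971520 bytes, so the failed allocation
of 20971522 bytes inside DB/common.php is almost exactly a second full
copy of the ~10MB query, which matches the last_query behaviour
analyzed in the comments above.)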
I proceeded to double the memory_limit every time I received that
error. Soon I was at 320M, and this was causing my server to thrash
while swapping to disk. I could not figure out exactly at which value
I had to set memory_limit for the system to accept my attachment.