From nginx-forum at nginx.us Fri Jun 1 07:58:59 2012
From: nginx-forum at nginx.us (youzhengchuan)
Date: Fri, 1 Jun :59 -0400 (EDT)
Subject: Re: Nginx-Upstream-proxy next upstream - huge bug
In-Reply-To:
References:
Message-ID:
With the above configuration, when nslookup for the domain
"flvstorage.ppserver.org.cn" returns just one IP address, this backend
upstream can't be used.
My apologies for my bad English.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,089#msg-227089
From brian at akins.org Fri Jun 1 11:43:31 2012
From: brian at akins.org (Brian Akins)
Date: Fri, 1 Jun :31 -0400
Subject: Re: Nginx-Upstream-proxy next upstream - huge bug
In-Reply-To:
References:
Message-ID:
Do something like this if you need it to use multiple IP addresses.
You need to define a resolver:

resolver 8.8.8.8;  # or your DNS servers

location / {
    set $myupstream flvdownload.ppserver.org.
    proxy_pass http://$myupstream;
}
From ganaiwali at gmail.com Fri Jun 1 14:17:17 2012
From: ganaiwali at gmail.com (tariq wali)
Date: Fri, 1 Jun :17 +0000
Subject: ssl/tls https with red cross
In-Reply-To:
References:
Message-ID:
Can anyone please tell me why I get this error on my nginx instance with SSL/TLS?
10:06:12 [emerg] 20286#0:
SSL_CTX_use_PrivateKey_file("/usr/local/nginx/conf/login.jobsgulf.com.key")
failed (SSL: error:0906406D:PEM routines:PEM_def_callback:problems getting
password error::PEM routines:PEM_do_header:bad password read
error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib)
10:06:20 [emerg] 866#0:
SSL_CTX_use_PrivateKey_file("/usr/local/nginx/conf/login.jobsgulf.com.key")
failed (SSL: error::digital envelope
routines:EVP_DecryptFinal_ex:bad decrypt error::PEM
routines:PEM_do_header:bad decrypt error:140B0009:SSL
routines:SSL_CTX_use_PrivateKey_file:PEM lib)
On Wed, May 30, 2012 at 3:44 PM, tariq wali wrote:
> Looking to get some help from the group.
> We are running nginx/0.7.62 and notice that https shows a red cross (either
> the connection is not encrypted or the page has some non-https content, and
> in my case it is an unencrypted connection). This is how the config looks:
> server_name login.jobsgulf.
> ssl_certificate login.jobsgulf.com.
> ssl_certificate_key login.jobsgulf.com.key;
> ssl_protocols SSLv3 TLSv1;
> ssl_ciphers HIGH:!aNULL:!MD5;
> ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
> keepalive_timeout
> ssl_session_cache shared:SSL:10m;
> ssl_session_timeout
> I want to know if we really have to explicitly specify ssl_protocols and
> ssl_ciphers in the config in order to be fully https for the said directive.
> Also, does it make sense to enable ssl/tls support on apache as well? In my
> case I have nginx in front of the apache.
> *Tariq Wali.*
*Tariq Wali.*
-------------- next part --------------
An HTML attachment was scrubbed...
From ne at vbart.ru Fri Jun 1 14:42:25 2012
From: ne at vbart.ru (Valentin V. Bartenev)
Date: Fri, 1 Jun :25 +0400
Subject: ssl/tls https with red cross
In-Reply-To:
References:
Message-ID:
On Friday 01 June 2012 tariq wali wrote:
> can anyone please tell why this error on my nginx instance with ssl/tls
10:06:12 [emerg] 20286#0:
> SSL_CTX_use_PrivateKey_file("/usr/local/nginx/conf/login.jobsgulf.com.key")
> failed (SSL: error:0906406D:PEM routines:PEM_def_callback:problems getting
> password error::PEM routines:PEM_do_header:bad password read
> error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib)
10:06:20 [emerg] 866#0:
> SSL_CTX_use_PrivateKey_file("/usr/local/nginx/conf/login.jobsgulf.com.key")
> failed (SSL: error::digital envelope
> routines:EVP_DecryptFinal_ex:bad decrypt error::PEM
> routines:PEM_do_header:bad decrypt error:140B0009:SSL
> routines:SSL_CTX_use_PrivateKey_file:PEM lib)
Nginx doesn't know the passphrase for your private key file. You need to remove the passphrase from the key.
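For the archives: the passphrase can be stripped with openssl. The sketch below generates a throwaway encrypted key first so it is self-contained; the filenames and the "demo" passphrase are illustrative, not from the original setup. Against a real key you would run only the second command and let openssl prompt for the passphrase instead of passing -passin.

```shell
# Create an encrypted demo key (stand-in for login.jobsgulf.com.key),
# then write a passphrase-free copy that nginx can load without prompting.
openssl genrsa -aes256 -passout pass:demo -out demo.key 2048
openssl rsa -in demo.key -passin pass:demo -out demo.nopass.key
chmod 600 demo.nopass.key
```

Then point ssl_certificate_key at the decrypted copy and keep its permissions tight, since it is now unprotected.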
wbr, Valentin V. Bartenev
From sammyraul1 at gmail.com Mon Jun 4 02:01:34 2012
From: sammyraul1 at gmail.com (Sammy Raul)
Date: Mon, 4 Jun :34 +0900
Subject: Video Streaming using non http backend, Ref ngx_drizzle
Message-ID:
I am trying to stream video (it can be mp4, flv, anything) using nginx.
The video stream, in chunks of 1024 bytes, will be available from the
backend non-http server.
To achieve this I followed the ngx_http_drizzle source.
I wrote an upstream handler and followed most of the source code from
ngx_http_drizzle.
I have a few questions, or to be more precise, I did not understand how
the output from drizzle is being streamed to the client.
1) In ngx_http_drizzle_output.c the function ngx_http_drizzle_submit_mem
is the place where it is setting the output filter. Is it also sending
the response, i.e. the stream, to the client at this point, or is it
some other function?
2) What do I need to do to send my video contents to the client? I
followed the drizzle example, but how do I set the output and send the
stream to the client? I have 1024 bytes available at one point and I
want to send this to the client until the backend server has no more
stream to send, and the client should be able to play the content.
3) Is it possible to send the video stream to the client so it plays in
the browser?
Can someone who knows about this please explain how it works and what
changes I need to make. It would be highly appreciated.
-------------- next part --------------
An HTML attachment was scrubbed...
From nginx-forum at nginx.us Mon Jun 4 06:31:20 2012
From: nginx-forum at nginx.us (zestsh)
Date: Mon, 4 Jun :20 -0400 (EDT)
Subject: Would like to implement WebSocket support
In-Reply-To:
References:
Message-ID:
Is there any discussion about the future websocket implementation?
From the roadmap, we couldn't get any new information.
Thank you.
??? Wrote:
-------------------------------------------------------
> This feature will be implemented in the 1.3 branch,
> you can see the
> roadmap here: http://trac.nginx.org/nginx/roadmap
> Or you can use my tcp proxy module as an
> alternative temporarily :
> https://github.com/yaoweibin/nginx_tcp_proxy_modul
Alexandr Gomoliako :
> >> I want to use websockets in my application
> server. My provider has
> >> always in front of the application server an
> nginx-server.
> >> And since nginx currently doesn't support
> websockets I have a problem.
> >> So I just wanted to ask, how is the progress
> about proxiing websocket
> >> communications?
> >> It would be very great and I could imagine that
> other users may ask for
> >> that, too in the near future.
> > I've been playing with websockets for awhile now
> and I don't think it
> > can make a difference for your provider. Real
> time web application are
> > really expensive to handle, each frame costs
> almost as much as
> > keepalive request, but you don't usually expect
> hundreds of requests
> > from each client every second. It's like
> streaming video by 100 bytes
> > at a time.
> > So, it has to be some kind of frame multiplexing
> over a single
> > connection with backend and even then it's still
> a lot to handle.
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,134#msg-227134
From sammyraul1 at gmail.com Mon Jun 4 08:15:34 2012
From: sammyraul1 at gmail.com (sammy_raul)
Date: Mon, 4 Jun :34 -0700 (PDT)
Subject: Video Streaming using non http backend, Ref ngx_drizzle
In-Reply-To:
References:
Message-ID:
Anything on this, just a small hint on how I can configure the output filter
would be highly appreciated.
View this message in context: http://nginx..nabble.com/Video-Streaming-using-non-http-backend-Ref-ngx-drizzle-tp0237.html
Sent from the nginx mailing list archive at Nabble.com.
From nginx-forum at nginx.us Mon Jun 4 08:18:34 2012
From: nginx-forum at nginx.us (youzhengchuan)
Date: Mon, 4 Jun :34 -0400 (EDT)
Subject: Re: Nginx-Upstream-proxy next upstream - huge bug
In-Reply-To:
References:
Message-ID:
thanks Brian
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,139#msg-227139
From nginx-forum at nginx.us Mon Jun 4 08:33:19 2012
From: nginx-forum at nginx.us (zestsh)
Date: Mon, 4 Jun :19 -0400 (EDT)
Subject: Would like to implement WebSocket support
In-Reply-To:
References:
Message-ID:
Will the websocket implementation in the nginx 1.3 branch work the same
as the tcp_proxy_module?
If not, what will it look like? I hope the nginx developer geeks can
give some clues about the functionality or the related API provided.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,140#msg-227140
From agentzh at gmail.com Mon Jun 4 14:13:14 2012
From: agentzh at gmail.com (agentzh)
Date: Mon, 4 Jun :14 +0800
Subject: Video Streaming using non http backend, Ref ngx_drizzle
In-Reply-To:
References:
Message-ID:
On Mon, Jun 4, 2012 at 10:01 AM, Sammy Raul wrote:
> I am trying to stream video it can be mp4, flv anything using nginx.
> The video streams in the form of 1024 size will be available from the
> backend non-http server.
I think this can be done trivially via ngx_lua module while still
achieving good performance. Here is a small example that demonstrates
how to meet your requirements with a little Lua:
location /api {
    content_by_lua '
        local sock, err = ngx.socket.tcp()
        if not sock then
            ngx.log(ngx.ERR, "failed to get socket: ", err)
            ngx.exit(500)
        end

        sock:settimeout(1000)

        local ok, err = sock:connect("some.backend.host", 12345)
        if not ok then
            ngx.log(ngx.ERR, "failed to connect to upstream: ", err)
            ngx.exit(502)
        end

        local bytes, err = sock:send("some query")
        if not bytes then
            ngx.log(ngx.ERR, "failed to send query: ", err)
            ngx.exit(502)
        end

        while true do
            local data, err, partial = sock:receive(1024)
            if not data then
                if err == "closed" then
                    if partial then
                        ngx.print(partial)
                    end
                    ngx.exit(ngx.OK)
                end
                ngx.log(ngx.ERR, "error reading data: ", err)
                ngx.exit(502)
            end

            ngx.print(data)
            ngx.flush(true)
        end
    ';
}
See the documentation for details:
http://wiki.nginx.org/HttpLuaModule
> For achieveing this I followed the ngx_http_drizzle source.
> I wrote an upstream handler and followed most of the source code from
> ngx_http_drizzle.
As the author of ngx_drizzle, I suggest you start from trying out
ngx_lua. Customizing ngx_drizzle for your needs requires a *lot* of
work. The C approach should only be attempted when Lua is indeed too
slow for your purpose, which is not very likely for many applications.
Also, please note that ngx_drizzle does not support strict
non-buffered data output. So, for downstream connections that are slow
to write, data will still accumulate in RAM without control. On the
other hand, the ngx_lua sample given above does not suffer from this problem.
> I have few questions or to be more precise I did not understood how the
> output from drizzle is being streamed to the client.
> 1) In ngx_http_drizzle_output.c the function ngx_http_drizzle_submit_mem is
> the place where it is setting the output filter, Is it also sending the
> response i.e the stream to the client at this point, or it is some other
> function?
Nope. Sending output buffers to the output filter chain is done by the
ngx_http_drizzle_output_bufs function.
> 2) What I need to do to send my video contents to the client, I followed the
> drizzle example but setting output and sending stream to the client, how I
> can achieve this. I have 1024B avaialble at one point and I want to send
> this to the client till the backend server has no stream to send and the
> client should be able to play the content.
Basically, you can call the ngx_http_output_filter function, just as
other nginx upstream modules.
> 3) Is it possible to send the video stream to the client with the browser.
I do not quite follow this question.
Best regards,
From nginx-forum at nginx.us Mon Jun 4 23:32:32 2012
From: nginx-forum at nginx.us (ptiseo)
Date: Mon, 4 Jun :32 -0400 (EDT)
Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work
Message-ID:
Had nginx 1.0.5 running on a Fedora 15 system just fine for months.
Upgraded the server from F15 to F17. At first, all seems well, but over
time, I keep getting 500 errors on proxied sites. Logs say: "socket()
failed (24: Too many open files) while connecting to upstream". Has
anyone else had this experience? If so, what's the root cause?
Had to revert server back to a backup to get sites functional.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,162#msg-227162
From reallfqq-nginx at yahoo.fr Tue Jun 5 00:15:07 2012
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 4 Jun :07 -0400
Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work
In-Reply-To:
References:
Message-ID:
Hmmm... Apparently there seem to be some official packages of Nginx in
Fedora repositories.
However, there have been some updates of the 1.0.x branch there. 1.0.5
seems to be far outdated.
For example the latest seems to be a 1.0.15-4 version of the package:
http://lists.fedoraproject.org/pipermail/package-announce/2012-May/081214.html
I can't check much, since I don't have Fedora. I just did a little
online research on the Fedora-Announce mailing-list.
Hope my 2 cents helped,
On Mon, Jun 4, 2012 at 7:32 PM, ptiseo wrote:
> Had nginx 1.0.5 running on a Fedora 15 system just fine for months.
> Upgraded the server from F15 to F17. At first, all seems well, but over
> time, I keep getting 500 errors on proxied sites. Logs say: "socket()
> failed (24: Too many open files) while connecting to upstream". Has
> anyone else had this experience? If so, what's the root cause?
> Had to revert server back to a backup to get sites functional.
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,162#msg-227162
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
From steeeeeveee at gmx.net Tue Jun 5 01:16:06 2012
From: steeeeeveee at gmx.net (Steve)
Date: Tue, 05 Jun :06 +0200
Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work
In-Reply-To:
References:
Message-ID:
-------- Original-Nachricht --------
> Datum: Mon, 4 Jun :32 -0400 (EDT)
> Von: "ptiseo"
> An: nginx at nginx.org
> Betreff: Upgrade From Fedora 15 to 17: nginx Doesn\'t Work
> Had nginx 1.0.5 running on a Fedora 15 system just fine for months.
Months? MONTHS? So you are not a new *nix user?
> Upgraded the server from F15 to F17. At first, all seems well, but over
> time, I keep getting 500 errors on proxied sites. Logs say: "socket()
> failed (24: Too many open files) while connecting to upstream". Has
> anyone else had this experience? If so, what's the root cause?
The root cause, you ask? You must be joking. I mean... how hard is it to interpret "Too many open files"?
> Had to revert server back to a backup to get sites functional.
Ohhh boy. All you need to do is increase the open file limit in /etc/sysctl.conf and /etc/security/limits.conf.
In my installation I currently have...
... in /etc/sysctl.conf:
fs.file-max = 5049800
... in /etc/security/limits.conf:
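Typical nofile entries there look like the following (illustrative values, not necessarily the ones from this message):

```
# /etc/security/limits.conf
*    soft  nofile  65535
*    hard  nofile  65535
```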
I assume you used the same nginx.conf as in the old install? So no need for me to mention worker_rlimit_nofile. Right?
Setting/getting file limits is really basic Linux system admin knowledge. I don't want to be harsh, but not knowing that and going back from a freshly installed Fedora 17 to a backup of Fedora 15 because of the above error is crazy. You need to spend some time educating yourself on how to maintain a *nix system.
And while at it... please take your time to learn how to use Google:
http://www.google.com/search?q=Too+many+open+files&ie=utf-8&oe=utf-8&aq=t
If you don't find the solution in the first 10 or 20 links then I am going to eat xx xxxxx!
From nginx-forum at nginx.us Tue Jun 5 01:45:36 2012
From: nginx-forum at nginx.us (ptiseo)
Date: Mon, 4 Jun :36 -0400 (EDT)
Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work
In-Reply-To:
References:
Message-ID:
@steve: nginx seems to attract the hostile and maladapted? I've seen
more arrogant a**es in this forum than most other places. You guys need
it's just software. No need for you to go nuts like that. It
just shows badly on you.
The reason I restored from backup is that I needed that proxy online
for development. And I have used Linux for a while that can be counted
in more than months. Do you know what they say about "assume"?
I did Google. I saw that worked for some and not for others. I tried it,
it didn't work for me. My file-max setting was already some 200K.
So, let me ask this, why would I need to increase open file limit
anyways? This is a low traffic proxy.
@BR: Thanks for not being as bad as steve. I did notice that Fedora does
not have an up-to-date package. For now, I will stay with the backup and
spin up another virtual machine to see if I can test further.
If anyone has any other ideas than the first 20 Google hits, I'd love to
hear of them. Thx.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,165#msg-227165
From steeeeeveee at gmx.net Tue Jun 5 02:03:25 2012
From: steeeeeveee at gmx.net (Steve)
Date: Tue, 05 Jun :25 +0200
Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work
In-Reply-To:
References:
Message-ID:
-------- Original-Nachricht --------
> Datum: Mon, 4 Jun :36 -0400 (EDT)
> Von: "ptiseo"
> An: nginx at nginx.org
> Betreff: Re: Upgrade From Fedora 15 to 17: nginx Doesn\'t Work
> @steve: nginx seems to attract the hostile and maladapted? I've seen
> more arrogant a**es in this forum than most other places. You guys need
> it's just software. No need for you to go nuts like that. It
> just shows badly on you.
LOL. It's almost 4am over here in Europe and I was sitting here reading stuff (fighting with sleep), looking at the nginx mailing list from time to time, and then I saw your post and almost fell off my chair. Could not resist and had to post. ;)
> The reason I restore from backup is because I needed that proxy online
> for development. And, I have used Linux for a while that can be counted
> in more than months. Do you know what they say about "assume"?
> I did Google. I saw that worked for some and not for others. I tried it,
> it didn't work for me. My file-max setting was already some 200K.
In sysctl.conf? Or /etc/security/limits.conf? Does your system use PAM?
> So, let me ask this, why would I need to increase open file limit
> anyways? This is a low traffic proxy.
Well... you obviously have the need, else nginx would not complain about a low open file descriptor limit. It looks like you configured nginx to use a lot of descriptors. But how can I tell without having seen your nginx configuration (I left my crystal ball in the office)?
If you want really good help, then post your nginx.conf, the output of "ulimit -a", the content of /etc/sysctl.conf, the content of /etc/security/limits.conf, the output of "ls -lah /etc/security/limits.d/*", and the content of the files found in /etc/security/limits.d/.
> @BR: Thanks for not being as bad as steve. I did notice that Fedora does
> not have an up-to-date package. For now, I will stay with the backup and
> spin up another virtual machine to see if I can test further.
> If anyone has any other ideas than the first 20 Google hits, I'd love to
> hear of them. Thx.
http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/
http://blog.unixy.net/2010/11/nginx-accept-failed-24-too-many-open-files-while-accepting-new-connection/
http://forum.nginx.org/read.php?2,187416
http://forum.nginx.org/read.php?2,61252
http://forum.nginx.org/read.php?2,13111
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,165#msg-227165
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From jim at ohlste.in Tue Jun 5 02:08:14 2012
From: jim at ohlste.in (Jim Ohlstein)
Date: Mon, 4 Jun :14 -0400
Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work
In-Reply-To:
References:
Message-ID:
On Mon, Jun 4, 2012, "ptiseo" wrote:
> @steve: nginx seems to attract the hostile and maladapted? I've seen
> more arrogant a**es in this forum than most other places. You guys need
> it's just software. No need for you to go nuts like that. It
> just shows badly on you.
Hardly the case. This is a pretty well mannered mailing list compared to
some to which I subscribe.
But, to be constructive, please do not top post. It's very confusing when
trying to follow a threaded discussion.
So, please answer the question asked: do you have an entry in your
nginx.conf for "worker_rlimit_nofile"?
Posting your full nginx.conf might help.
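For reference, that directive goes in the main context of nginx.conf, next to worker_processes (the numbers below are just examples):

```
worker_processes      4;
worker_rlimit_nofile  65535;   # per-worker limit on open file descriptors

events {
    worker_connections  8192;  # should fit within worker_rlimit_nofile
}
```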
> The reason I restore from backup is because I needed that proxy online
> for development. And, I have used Linux for a while that can be counted
> in more than months. Do you know what they say about "assume"?
> I did Google. I saw that worked for some and not for others. I tried it,
> it didn't work for me. My file-max setting was already some 200K.
To which settings(s) are you referring?
> So, let me ask this, why would I need to increase open file limit
> anyways? This is a low traffic proxy.
Maybe an issue with how you've configured *your* system which has nothing
to do with nginx? Not to be one of those hostile, maladapted, arrogant
people to whom you referred, but this isn't a Fedora mailing list. Perhaps
you can find help there in determining what process(es) is/are using all of
those file descriptors.
Maybe one of them will hold your hand and not hurt
your feelings in the process. Calling people names is certainly *not* a
good way to get people to help you.
> @BR: Thanks for not being as bad as steve. I did notice that Fedora does
> not have an up-to-date package. For now, I will stay with the backup and
> spin up another virtual machine to see if I can test further.
> If anyone has any other ideas than the first 20 Google hits, I'd love to
> hear of them. Thx.
Jim Ohlstein
-------------- next part --------------
An HTML attachment was scrubbed...
From sammyraul1 at gmail.com Tue Jun 5 04:06:23 2012
From: sammyraul1 at gmail.com (sammy_raul)
Date: Mon, 4 Jun :23 -0700 (PDT)
Subject: Video Streaming using non http backend, Ref ngx_drizzle
In-Reply-To:
References:
Message-ID:
Thanks agentzh for explaining so well.
When I am connected to the backend server, I get a buffer which I am
sending to the client like this:
static ngx_int_t
ngx_http_ccn_send_output_bufs(ngx_http_request_t *r,
    ngx_http_upstream_ccn_peer_data_t *dp, const unsigned char *data,
    size_t data_size)
{
    ngx_http_upstream_t  *u = r->upstream;
    ngx_chain_t           out;
    ngx_buf_t            *b;
    ngx_int_t             rc;

    /* allocate a buffer for your response body */
    b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
    if (b == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    /* attach this buffer to the buffer chain */
    out.buf = b;
    out.next = NULL;

    /* adjust the pointers of the buffer */
    b->pos = (u_char *) data;
    b->last = b->pos + data_size - 1;
    b->memory = 1;    /* this buffer is in memory */
    b->last_buf = 1;  /* this is the last buffer in the buffer chain */

    if (!u->header_sent) {
        fprintf(stdout, "ngx_http_ccn_send_output_bufs u->header_sent\n");

        r->headers_out.status = NGX_HTTP_OK;

        /* set the Content-Type header */
        r->headers_out.content_type.data =
            (u_char *) "application/octet-stream";
        r->headers_out.content_type.len =
            sizeof("application/octet-stream") - 1;
        r->headers_out.content_type_len =
            sizeof("application/octet-stream") - 1;

        rc = ngx_http_send_header(r);
        if (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE) {
            fprintf(stdout, "ngx_http_ccn_send_output_bufs header sent error\n");
            return rc;
        }

        u->header_sent = 1;
        fprintf(stdout, "ngx_http_ccn_send_output_bufs u->header_sent=%d\n",
                u->header_sent);
    }

    rc = ngx_http_output_filter(r, &out);
    if (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE) {
        return rc;
    }

    return rc;
}
I am calling this function every time I receive data (1024 bytes) from
the backend. When it is the end of the stream I am calling
ngx_http_finalize_request(r, rc);
but it is not working as expected, which would be playing the video file
in the browser.
I didn't follow the lua module yet, but will look into it.
Is there anything I am doing wrong while setting the output buffer? Do I
need to change the line b->last_buf = 1;
or something else?
View this message in context: http://nginx..nabble.com/Video-Streaming-using-non-http-backend-Ref-ngx-drizzle-tp0247.html
Sent from the nginx mailing list archive at Nabble.com.
From agentzh at gmail.com Tue Jun 5 04:16:57 2012
From: agentzh at gmail.com (agentzh)
Date: Tue, 5 Jun :57 +0800
Subject: Video Streaming using non http backend, Ref ngx_drizzle
In-Reply-To:
References:
Message-ID:
On Tue, Jun 5, 2012 at 12:06 PM, sammy_raul wrote:
>    /* adjust the pointers of the buffer */
>    b->pos = (u_char *) data;
>    b->last = b->pos + data_size - 1;
>    b->memory = 1;    /* this buffer is in memory */
>    b->last_buf = 1;  /* this is the last buffer in the buffer chain */
Setting b->last_buf to 1 means the current buf is the last buf in the
whole response body stream in this context (actually it is the
indicator for the end of the output data stream). So you must not set
this for every single buf.
Also, you should never set this flag in case you're in a subrequest or
things will break.
> this function I am calling everytime I am receiving data (1024) from the
> when it is end of stream I am calling
> ngx_http_finalize_request(r, rc);
Call ngx_http_send_header once and call ngx_http_output_filter
multiple times (as needed).
If you need strictly non-buffered output behavior, you have to *wait*
for the downstream to flush out *all* the data before continuing to
read data from upstream. You can check out how the
ngx_http_upstream module (in non-buffered mode) and ngx_lua do this.
> I didn't follow the lua module yet, but will look into it
I strongly recommend it because it should save you a *lot* of time (I guess) :)
> Is there anything I am doing wrong while setting the output buffer, do I
> need to change this line b->last_buf = 1;
> or something else.
See above.
P.S. C let's go scripting! :D
From nginx-forum at nginx.us Tue Jun 5 07:33:24 2012
From: nginx-forum at nginx.us (speedfirst)
Date: Tue, 5 Jun :24 -0400 (EDT)
Subject: Can't upload big files via nginx as reverse proxy
Message-ID:
In my env, the layout is: client -> nginx -> jetty.
In the client, there is a file-upload control. I tried to upload a
file with size of 3.7MB. In the client request, the content type is
"multipart/form-data", and there is an "Expect: 100-continue" header.
Through tcpdump, I could see nginx immediately return an "HTTP/1.1 100
Continue" response, and started to read data. After buffering the
uploaded data, nginx then started to send them to jetty. However in this
time, no "Expect: 100-continue" header was proxied, because HTTP/1.0 is
used for the upstream connection.
After sending part of the data, nginx stopped proxying the rest,
but the connection was kept open. After 30s, jetty reported a timeout
exception and returned an error response. Nginx finally proxied this
response back to the client.
I simply merged all the tcp segments which were sent from nginx to
jetty, and found only 400K bytes were proxied.
My nginx config is quite simple, just:
server {
    listen 80;
    location / {
        proxy_pass http://
    }
}
All proxy buffer config was not explicitly set, so the default values
were applied. I tried to change the "proxy_" directives and re-do the
experiment above, and found the result was the same.
I also tried to observe the temp file written by nginx, but it's
automatically removed when everything is done. Any way to keep it?
Therefore, I'm wondering: is this expected? Did I make mistakes in
configuring the proxy buffers? Do I have to use the third-party "upload"
module (http://www.grid.net.ru/nginx/upload.en.html) to make it work?
Many thanks.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,175#msg-227175
From nginx-forum at nginx.us Tue Jun 5 07:39:09 2012
From: nginx-forum at nginx.us (speedfirst)
Date: Tue, 5 Jun :09 -0400 (EDT)
Subject: Can't upload big files via nginx as reverse proxy
In-Reply-To:
References:
Message-ID:
by the way, my client request is a POST request.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,176#msg-227176
From mdounin at mdounin.ru Tue Jun 5 07:54:17 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 5 Jun :17 +0400
Subject: Can't upload big files via nginx as reverse proxy
In-Reply-To:
References:
Message-ID:
On Tue, Jun 05, 2012 at 03:33:24AM -0400, speedfirst wrote:
> In my env, the layout is:
> In the client, there is a file-upload control. I tried to upload a
> file with size of 3.7MB. In the client request, the content type is
> "multipart/form-data", and there is an "Expect: 100-continue" header.
> Through tcpdump, I could see nginx immediately return an "HTTP/1.1 100
> Continue" response, and started to read data. After buffering the
> uploaded data, nginx then started to send them to jetty. However in this
> time, no "Expect: 100-continue" header was proxied because HTTP/1.0 is
So far this is expected behaviour.
> After sending part of data, nginx stopped continuing to proxy the rest
> of data, but the connection is kept. After 30s, jetty reports time out
> exception and returned an response. Nginx finally proxied this response
> back to client.
> I simply merged all the tcp segments which was sent from nginx to jetty,
> and found only 400K bytes are proxied.
This is obviously not expected.
Anything in the error log?
Could you please provide a tcpdump and a debug log?
It would be also cool to see which version of nginx you are
using, i.e. please provide "nginx -V" output, and a full config.
> My nginx config is quite simple, just
> server {
listen 80;
location / {
proxy_pass http://
This misses at least "client_max_body_size" as by default 3.5MB
upload will be just rejected.
> All proxy buffer config was not explicitly set so the default values
> were applied. I tried to "proxy_" and re-do the experiment
> above and find the result was same.
Proxy buffers, as well as proxy_buffering, don't matter, as they
only affect sending the response from an upstream to a client.
> I also tried to observe the temp file written by nginx but it's
> automatically removed when everything is done. Any way to keep it?
client_body_in_file_only
See here for details:
http://nginx.org/r/client_body_in_file_only
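A minimal sketch of using it to keep the request body files around (the paths and backend address are illustrative):

```
location / {
    client_body_in_file_only  on;   # body always goes to a file, file is kept
    client_body_temp_path     /var/tmp/nginx_body;
    proxy_pass                http://127.0.0.1:8080;
}
```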
Maxim Dounin
From nginx-forum at nginx.us Tue Jun 5 09:24:48 2012
From: nginx-forum at nginx.us (speedfirst)
Date: Tue, 5 Jun :48 -0400 (EDT)
Subject: Can't upload big files via nginx as reverse proxy
In-Reply-To:
References:
Message-ID:
Thanks for your quick response.
In my config, client_max_body_size is set to 0. Does that mean
"unlimited"?
I made this test in two versions, 0.9.3 and 1.2.0. Both have the same
result.
From nginx-forum at nginx.us Tue Jun 5 10:08:30 2012
From: nginx-forum at nginx.us (speedfirst)
Date: Tue, 5 Jun :30 -0400 (EDT)
Subject: Can't upload big files via nginx as reverse proxy
In-Reply-To:
References:
Message-ID:
I retried the test with client_max_body_size 0;
The size of the tmp file is as expected, about 3.7M:
root at zm-dev03:/opt/data/tmp/nginx/client# ll
-rw------- 1 speedfirst speedfirst 2012-06-06 01:27
Here is the client script from curl:
curl -v -u admin at dev03.eng.test.com:test123 -F
"file=@test.filename=test.type=application/x-compressed-tar"
"http://dev03.eng.test.com/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset"
Here is the debug_http nginx log.
01:27:35 [debug] 15621#0: *5 http process request line
01:27:35 [debug] 15621#0: *5 http request line: "POST
/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset HTTP/1.1"
01:27:35 [debug] 15621#0: *5 http uri:
"/service/home/admin at dev03.eng.test.com/"
01:27:35 [debug] 15621#0: *5 http args:
"fmt=tgz&resolve=reset"
01:27:35 [debug] 15621#0: *5 http exten: ""
01:27:35 [debug] 15621#0: *5 http process request header
01:27:35 [debug] 15621#0: *5 http header: "Authorization:
Basic YWRtaW5Aem0tZGV2MDMuZW5nLnZtd2FyZS5jb206dGVzdDEyMw=="
01:27:35 [debug] 15621#0: *5 http header: "User-Agent:
curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1
zlib/1.2.3.4 libidn/1.23 librtmp/2.3"
01:27:35 [debug] 15621#0: *5 http header: "Host:
dev03.eng.vmware.com"
01:27:35 [debug] 15621#0: *5 http header: "Accept: */*"
01:27:35 [debug] 15621#0: *5 http header: "Content-Length: 3914486"
01:27:35 [debug] 15621#0: *5 http header: "Expect:
100-continue"
01:27:35 [debug] 15621#0: *5 http header: "Content-Type:
multipart/form-data; boundary=----------------------------f9dbdf4f72b4"
01:27:35 [debug] 15621#0: *5 http header done
01:27:35 [debug] 15621#0: *5 rewrite phase: 0
01:27:35 [debug] 15621#0: *5 test location: "/"
01:27:35 [debug] 15621#0: *5 using configuration "/"
01:27:35 [debug] 15621#0: *5 generic phase: 4
01:27:35 [debug] 15621#0: *5 generic phase: 5
01:27:35 [debug] 15621#0: *5 access phase: 6
01:27:35 [debug] 15621#0: *5 access phase: 7
01:27:35 [debug] 15621#0: *5 post access phase: 8
01:27:35 [debug] 15621#0: *5 send 100 Continue
01:27:35 [debug] 15621#0: *5 http read client request body
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http finalize request: -4,
"/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset" a:1,
01:27:35 [debug] 15621#0: *5 http request count:2 blk:0
01:27:35 [debug] 15621#0: *5 http run request:
"/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset"
01:27:35 [debug] 15621#0: *5 http read client request body
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http run request:
"/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset"
01:27:35 [debug] 15621#0: *5 http read client request body
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http run request:
"/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset"
01:27:35 [debug] 15621#0: *5 http read client request body
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http run request:
"/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset"
01:27:35 [debug] 15621#0: *5 http read client request body
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http run request:
"/service/home/admin at dev03.eng.test.com/?fmt=tgz&resolve=reset"
01:27:35 [debug] 15621#0: *5 http read client request body
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http run request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:27:35 [debug] 15621#0: *5 http read client request body
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [notice] 15621#0: *5 a client request body is
buffered to a temporary file
/opt/zimbra/data/tmp/nginx/client/, client: 10.112.117.117,
server: , request: "POST
/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset HTTP/1.1",
host: "dev03.test.com"
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http run request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:27:35 [debug] 15621#0: *5 http read client request body
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http client request body recv
01:27:35 [debug] 15621#0: *5 http client request body rest
01:27:35 [debug] 15621#0: *5 http run request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
... ... ... <-- tens of similar log entries
01:27:37 [debug] 15621#0: *5 http read client request body
01:27:37 [debug] 15621#0: *5 http client request body recv
01:27:37 [debug] 15621#0: *5 http client request body rest
01:27:37 [debug] 15621#0: *5 http client request body recv
01:27:37 [debug] 15621#0: *5 http client request body rest
01:27:37 [debug] 15621#0: *5 http run request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:27:37 [debug] 15621#0: *5 http read client request body
01:27:37 [debug] 15621#0: *5 http client request body recv
01:27:37 [debug] 15621#0: *5 http client request body rest 0
01:27:37 [debug] 15621#0: *5 http init upstream, client
01:27:37 [debug] 15621#0: *5 http script copy:
"X-Forwarded-For: "
01:27:37 [debug] 15621#0: *5 http script var:
"10.112.117.117"
01:27:37 [debug] 15621#0: *5 http script copy: "
01:27:37 [debug] 15621#0: *5 http script copy: "Host: "
01:27:37 [debug] 15621#0: *5 http script var:
"dev03.test.com"
01:27:37 [debug] 15621#0: *5 http script copy: "
01:27:37 [debug] 15621#0: *5 http script copy: "Connection: close"
01:27:37 [debug] 15621#0: *5 http proxy header:
"Authorization: Basic
YWRtaW5Aem0tZGV2MDMuZW5nLnZtd2FyZS5jb206dGVzdDEyMw=="
01:27:37 [debug] 15621#0: *5 http proxy header: "User-Agent:
curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1
zlib/1.2.3.4 libidn/1.23 librtmp/2.3"
01:27:37 [debug] 15621#0: *5 http proxy header: "Accept: */*"
01:27:37 [debug] 15621#0: *5 http proxy header:
"Content-Length: 3914486"
01:27:37 [debug] 15621#0: *5 http proxy header:
"Content-Type: multipart/form-data;
boundary=----------------------------f9dbdf4f72b4"
01:27:37 [debug] 15621#0: *5 http proxy header:
"POST /service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset
X-Forwarded-For: 10.112.117.117
Host: dev03.test.com
Connection: close
Authorization: Basic
YWRtaW5Aem0tZGV2MDMuZW5nLnZtd2FyZS5jb206dGVzdDEyMw==
User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
Accept: */*
Content-Length: 3914486
Content-Type: multipart/form-data;
boundary=----------------------------f9dbdf4f72b4
01:27:37 [debug] 15621#0: *5 http cleanup add:
01:27:37 [debug] 15621#0: *5 zmauth: prepare route for proxy
... ... <-- choose the upstream route
01:27:37 [debug] 15621#0: *5 zmauth: prepare upstream
connection, try: 1
01:27:37 [debug] 15621#0: *5 http upstream connect: -2
01:27:37 [debug] 15621#0: *5 http upstream request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:27:37 [debug] 15621#0: *5 http upstream send request
01:27:37 [debug] 15621#0: *5 http upstream send request
01:27:37 [debug] 15621#0: *5 http upstream request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:27:37 [debug] 15621#0: *5 http upstream send request
01:27:37 [debug] 15621#0: *5 http upstream send request
01:27:40 [debug] 15621#0: *5 http upstream request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:27:40 [debug] 15621#0: *5 http upstream send request
01:27:40 [debug] 15621#0: *5 http upstream send request
01:27:44 [debug] 15621#0: *5 http upstream request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
... ... <--- tens of similar log entries
01:28:19 [debug] 15621#0: *5 http upstream send request
01:28:19 [debug] 15621#0: *5 http upstream send request
01:28:22 [debug] 15621#0: *5 http upstream request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:28:22 [debug] 15621#0: *5 http upstream send request
01:28:22 [debug] 15621#0: *5 http upstream send request
01:28:22 [debug] 15621#0: *5 http upstream request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:28:22 [debug] 15621#0: *5 http upstream process header
01:28:22 [debug] 15621#0: *5 http proxy status 200 "200 OK"
01:28:22 [debug] 15621#0: *5 http proxy header: "Date: Tue,
05 Jun :37 GMT"
01:28:22 [debug] 15621#0: *5 http proxy header:
"Content-Type: text/ charset=utf-8"
01:28:22 [debug] 15621#0: *5 http proxy header: "Connection:
01:28:22 [debug] 15621#0: *5 http proxy header done
01:28:22 [debug] 15621#0: *5 HTTP/1.1 200 OK
Server: nginx
Date: Tue, 05 Jun :22 GMT
Content-Type: text/ charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
01:28:22 [debug] 15621#0: *5 http write filter: l:0 f:0
01:28:22 [debug] 15621#0: *5 http cacheable: 0
01:28:22 [debug] 15621#0: *5 http upstream process upstream
01:28:23 [debug] 15621#0: *5 http upstream request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:28:23 [debug] 15621#0: *5 http upstream send request
01:28:57 [debug] 15621#0: *5 http upstream request:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:28:57 [debug] 15621#0: *5 http upstream process upstream
01:28:57 [debug] 15621#0: *5 http output filter
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:28:57 [debug] 15621#0: *5 http copy filter:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:28:57 [debug] 15621#0: *5 http postpone filter
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:28:57 [debug] 15621#0: *5 http chunk: 22
01:28:57 [debug] 15621#0: *5 http write filter: l:0 f:0
01:28:57 [debug] 15621#0: *5 http copy filter: 0
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:28:57 [debug] 15621#0: *5 http upstream exit:
01:28:57 [debug] 15621#0: *5 finalize http upstream request:
01:28:57 [debug] 15621#0: *5 finalize http proxy request
01:28:57 [debug] 15621#0: *5 free rr peer 1 0
01:28:57 [debug] 15621#0: *5 close http upstream connection:
01:28:57 [debug] 15621#0: *5 http upstream temp fd: -1
01:28:57 [debug] 15621#0: *5 http output filter
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:28:57 [debug] 15621#0: *5 http copy filter:
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:28:57 [debug] 15621#0: *5 http postpone filter
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
00007FFF51B162A0
01:28:57 [debug] 15621#0: *5 http chunk: 0
01:28:57 [debug] 15621#0: *5 http write filter: l:1 f:0
01:28:57 [debug] 15621#0: *5 http write filter limit 0
01:28:57 [debug] 15621#0: *5 http write filter
01:28:57 [debug] 15621#0: *5 http copy filter: 0
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
01:28:57 [debug] 15621#0: *5 http finalize request: 0,
"/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset" a:1, c:1
01:28:57 [debug] 15621#0: *5 set http keepalive handler
01:28:57 [debug] 15621#0: *5 http close request
01:28:57 [debug] 15621#0: *5 http log handler
01:28:57 [debug] 15621#0: *5 hc free: 0000 0
01:28:57 [debug] 15621#0: *5 hc busy: 0000 0
01:28:57 [debug] 15621#0: *5 tcp_nodelay
01:28:57 [debug] 15621#0: *5 http keepalive handler
01:28:57 [debug] 15621#0: *5 http keepalive handler
01:28:57 [info] 15621#0: *5 client 10.112.117.117 closed
keepalive connection
01:28:57 [debug] 15621#0: *5 close http connection: 14
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,182#msg-227182
From mdounin at mdounin.ru  Tue Jun  5 10:47:51 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 5 Jun :51 +0400
Subject: Can't upload big files via nginx as reverse proxy
In-Reply-To:
References:
Message-ID:
On Tue, Jun 05, 2012 at 06:08:30AM -0400, speedfirst wrote:
> 01:27:37 [debug] 15621#0: *5 http read client request body
> 01:27:37 [debug] 15621#0: *5 http client request body recv
> 01:27:37 [debug] 15621#0: *5 http client request body rest 0
Ok, so the request body is read from a client without any problems.
> 01:27:37 [debug] 15621#0: *5 zmauth: prepare route for proxy
> 01:27:37 [debug] 15621#0: *5 zmauth: prepare upstream
> connection, try: 1
Are you able to reproduce the problem without 3rd party
modules/patches?
(Unlikely it's related in this particular case,
but just to make sure.)
> 01:27:37 [debug] 15621#0: *5 http upstream connect: -2
> 01:27:37 [debug] 15621#0: *5 http upstream request:
> "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
> 01:27:37 [debug] 15621#0: *5 http upstream send request
> 01:27:37 [debug] 15621#0: *5 http upstream send request
> 01:27:37 [debug] 15621#0: *5 http upstream request:
> "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
> 01:27:37 [debug] 15621#0: *5 http upstream send request
> 01:27:37 [debug] 15621#0: *5 http upstream send request
> 01:27:40 [debug] 15621#0: *5 http upstream request:
> "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
> 01:27:40 [debug] 15621#0: *5 http upstream send request
> 01:27:40 [debug] 15621#0: *5 http upstream send request
> 01:27:44 [debug] 15621#0: *5 http upstream request:
> "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
> 01:28:19 [debug] 15621#0: *5 http upstream send request
> 01:28:19 [debug] 15621#0: *5 http upstream send request
> 01:28:22 [debug] 15621#0: *5 http upstream request:
> "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
> 01:28:22 [debug] 15621#0: *5 http upstream send request
> 01:28:22 [debug] 15621#0: *5 http upstream send request
> 01:28:22 [debug] 15621#0: *5 http upstream request:
> "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
> 01:28:22 [debug] 15621#0: *5 http upstream process header
> 01:28:22 [debug] 15621#0: *5 http proxy status 200 "200 OK"
On the other hand, it looks like sending of the request is still
in progress, and upstream server replies before the request was
completely sent.
It might indicate it just doesn't wait long
enough, and the problem is in the backend (and slow connectivity
to the backend).
I don't see any pause in request sending you've claimed in your
initial message.
(and see below)
> 01:28:22 [debug] 15621#0: *5 http proxy header: "Date: Tue,
> 05 Jun :37 GMT"
> 01:28:22 [debug] 15621#0: *5 http proxy header:
> "Content-Type: text/ charset=utf-8"
> 01:28:22 [debug] 15621#0: *5 http proxy header: "Connection:
> 01:28:22 [debug] 15621#0: *5 http proxy header done
> 01:28:22 [debug] 15621#0: *5 HTTP/1.1 200 OK
> Server: nginx
> Date: Tue, 05 Jun :22 GMT
> Content-Type: text/ charset=utf-8
> Transfer-Encoding: chunked
> Connection: keep-alive
> 01:28:22 [debug] 15621#0: *5 http write filter: l:0 f:0
> 01:28:22 [debug] 15621#0: *5 http cacheable: 0
> 01:28:22 [debug] 15621#0: *5 http upstream process upstream
> 01:28:23 [debug] 15621#0: *5 http upstream request:
> "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
> 01:28:23 [debug] 15621#0: *5 http upstream send request
> 01:28:57 [debug] 15621#0: *5 http upstream request:
> "/service/home/admin at dev03.test.com/?fmt=tgz&resolve=reset"
On the other hand, here is the ~30s pause you've probably talked
about. It might indicate that the upstream tries to send headers
before "receiving and interpreting a request message" (as per HTTP
RFC 2616 it should do it "after"), which confuses nginx and makes
it think further body bytes aren't needed.
You may want to dig further into what goes on on the backend to
understand the real problem.
Maxim Dounin
From nginx-forum at nginx.us  Tue Jun  5 11:26:49 2012
From: nginx-forum at nginx.us (speedfirst)
Date: Tue, 5 Jun :49 -0400 (EDT)
Subject: Can't upload big files via nginx as reverse proxy
In-Reply-To:
References:
Message-ID:
>On the other hand, it looks like sending of the request is still
>in progress, and upstream server replies before the request was
>completely sent. It might indicate it just doesn't wait long
>enough, and the problem is in the backend (and slow connectivity
>to the backend).
>I don't see any pause in request sending you've claimed in your
>initial message.
>On the other hand, here is ~ 30s pause you've probably talked
>about. It might indicate that upstream tries to send headers
>before "receiving and interpreting a request message" (as per HTTP
>RFC 2616 it should do it "after"), which confuses nginx and makes
>it think further body bytes aren't needed.
>You may want to dig further into what goes on on the backend to
>understand the real problem.
Yes, I agree, and I also noticed where the real problem is. I just
created a fake backend (which simply receives the uploaded data and
writes it to disk), and nginx correctly passed all the data to it.
Let me hack the backend code to see what's wrong. Will update if I
find something new.
Thanks for your inspiring comments :)
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,190#msg-227190
From nginx-forum at nginx.us  Tue Jun  5 12:43:44 2012
From: nginx-forum at nginx.us (colorando)
Date: Tue, 5 Jun :44 -0400 (EDT)
Subject: Setting up nginx as Visual Studio 2010 project
Message-ID:
I'd like to make a Visual Studio project from the nginx source and then
build it. Has anyone already done this and can tell me how to do it?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,198#msg-227198
From mdounin at mdounin.ru  Tue Jun  5 14:30:50 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 5 Jun :50 +0400
Subject: nginx-1.3.1
Message-ID:
Changes with nginx 1.3.1
05 Jun 2012
*) Security: now nginx/Windows ignores trailing dot in URI path
component, and does not allow URIs with ":$" in it.
Thanks to Vladimir Kochetkov, Positive Research Center.
*) Feature: the "proxy_pass", "fastcgi_pass", "scgi_pass", "uwsgi_pass"
directives, and the "server" directive inside the "upstream" block,
now support IPv6 addresses.
*) Feature: the "resolver" directive now supports IPv6 addresses and an
optional port specification.
*) Feature: the "least_conn" directive inside the "upstream" block.
*) Feature: it is now possible to specify a weight for servers while
using the "ip_hash" directive.
*) Bugfix: a segmentation fault might occur in a worker process if the
"image_filter" directive was used; the bug had appeared in 1.3.0.
*) Bugfix: nginx could not be built with ngx_cpp_test_module; the bug
had appeared in 1.1.12.
*) Bugfix: access to variables from SSI and embedded perl module might
not work after reconfiguration.
Thanks to Yichun Zhang.
*) Bugfix: in the ngx_http_xslt_filter_module.
Thanks to Kuramoto Eiji.
*) Bugfix: memory leak if $geoip_org variable was used.
Thanks to Denis F. Latypoff.
*) Bugfix: in the "proxy_cookie_domain" and "proxy_cookie_path"
directives.
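As a sketch of the two new load-balancing features above (the upstream names and server addresses are made up for illustration):

```nginx
# least_conn (new in 1.3.1): route each request to the server
# with the fewest active connections.
upstream app_least_conn {
    least_conn;
    server 192.0.2.10:8080;
    server 192.0.2.11:8080;
}

# ip_hash now accepts per-server weights (new in 1.3.1).
upstream app_ip_hash {
    ip_hash;
    server 192.0.2.20:8080 weight=2;
    server 192.0.2.21:8080;
}
```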
Maxim Dounin
From mdounin at mdounin.ru  Tue Jun  5 14:31:21 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 5 Jun :21 +0400
Subject: nginx-1.2.1
Message-ID:
Changes with nginx 1.2.1
05 Jun 2012
*) Security: now nginx/Windows ignores trailing dot in URI path
component, and does not allow URIs with ":$" in it.
Thanks to Vladimir Kochetkov, Positive Research Center.
*) Feature: the "debug_connection" directive now supports IPv6 addresses
and the "unix:" parameter.
*) Feature: the "set_real_ip_from" directive and the "proxy" parameter
of the "geo" directive now support IPv6 addresses.
*) Feature: the "real_ip_recursive", "geoip_proxy", and
"geoip_proxy_recursive" directives.
*) Feature: the "proxy_recursive" parameter of the "geo" directive.
*) Bugfix: a segmentation fault might occur in a worker process if the
"resolver" directive was used.
*) Bugfix: a segmentation fault might occur in a worker process if the
"fastcgi_pass", "scgi_pass", or "uwsgi_pass" directives were used and
backend returned incorrect response.
*) Bugfix: a segmentation fault might occur in a worker process if the
"rewrite" directive was used and new request arguments in a
replacement used variables.
*) Bugfix: nginx might hog CPU if the open file resource limit was
exceeded.
*) Bugfix: nginx might loop infinitely over backends if the
"proxy_next_upstream" directive with the "http_404" parameter was
used and there were backup servers specified in an upstream block.
*) Bugfix: adding the "down" parameter of the "server" directive might
cause unneeded client redistribution among backend servers if the
"ip_hash" directive was used.
*) Bugfix: socket leak.
Thanks to Yichun Zhang.
*) Bugfix: in the ngx_http_fastcgi_module.
Maxim Dounin
From mdounin at mdounin.ru  Tue Jun  5 14:31:59 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 5 Jun :59 +0400
Subject: security advisory
Message-ID:
Vladimir Kochetkov, Positive Research Center, discovered a
security problem in nginx/Windows, which might allow security
restrictions bypass (CVE-).
There are many ways to access the same file when working under
Windows, and nginx failed to account for all of them.  As a
result, it was possible to bypass security restrictions like

    location /directory/ {
        deny all;
    }

by requesting a file as "/directory::$index_allocation/file", or
"/directory:$i30:$index_allocation/file", or "/directory./file".
The problem is fixed in nginx/Windows 1.3.1, 1.2.1.
For older versions the following configuration can be used as a
workaround:
    location ~ "(\./|:\$)" {
        deny all;
    }
Maxim Dounin
From reallfqq-nginx at yahoo.fr  Tue Jun  5 15:37:43 2012
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 5 Jun :43 -0400
Subject: Upgrade From Fedora 15 to 17: nginx Doesn't Work
In-Reply-To:
References:
Message-ID:
> @BR: Thanks for not being as bad as steve.
'As bad'? I was really trying to help. In my opinion, it's always better
to have the least outdated version, so 1.0.15 is way better than 1.0.5.
If I appeared arrogant to you, well, nvm...
> Calling people names is certainly *not* a good way to get people to help
I totally agree. I won't lose any more time on your case.
On Mon, Jun 4, 2012 at 10:08 PM, Jim Ohlstein
> On Jun 4, 2012, "ptiseo" wrote:
> > @steve: nginx seems to attract the hostile and maladapted? I've seen
> > more arrogant a**es in this forum than most other places. You guys need
> > to realize it's just software. No need for you to go nuts like that. It
> > just shows badly on you.
> Hardly the case. This is a pretty well mannered mailing list compared to
> some to which I subscribe.
> But, to be constructive, please do not top post. It's very confusing when
> trying to follow a threaded discussion.
> So, please answer the question asked -do you have an entry in your
> nginx.conf for "worker_rlimit_nofile"?
> Posting your full nginx.conf might help.
> > The reason I restore from backup is because I needed that proxy online
> > for development. And, I have used Linux for a while that can be counted
> > in more than months. Do you know what they say about "assume"?
> > I did Google. I saw that worked for some and not for others. I tried it,
> > it didn't work for me. My file-max setting was already some 200K.
> To which settings(s) are you referring?
> > So, let me ask this, why would I need to increase open file limit
> > anyways? This is a low traffic proxy.
> Maybe an issue with how you've configured *your* system which has nothing
> to do with nginx? Not to be one of those hostile, maladapted, arrogant
> people to whom you referred, but this isn't a Fedora mailing list. Perhaps
> you can find help there in determining what process(es) is/are using all of
> those file descriptors.
Maybe one of them will hold your hand and not hurt
> your feelings in the process. Calling people names is certainly *not* a
> good way to get people to help you.
> > @BR: Thanks for not being as bad as steve. I did notice that Fedora does
> > not have an up-to-date package. For now, I will stay with the backup and
> > spin up another virtual machine to see if I can test further.
> > If anyone has any other ideas than the first 20 Google hits, I'd love to
> > hear of them. Thx.
> Jim Ohlstein
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
From manens at grossac.org  Tue Jun  5 16:04:02 2012
From: manens at grossac.org (Florent Manens)
Date: Tue, 5 Jun :02 +0200 (CEST)
Subject: Client Certificate verification for mail
In-Reply-To:
Message-ID:
Hi NGINX team,
I can read here :
http://mailman.nginx.org/pipermail/nginx/2007-March/000825.html
and in this thread :
http://mailman.nginx.org/pipermail/nginx-ru/2009-July/026304.html
that the client certificate verification is not supported by NGINX (and that there is no RFE for it).
We want to implement client certificate verification for IMAP and POP connection and we plan to rely on NGINX for scalability.
I think that it is possible to implement client certificate verification in NGINX, but I still need to know:
* if it is a trivial task
* if I can do it only with addons
* why it isn't already in NGINX core
I would appreciate it if someone could give me directions on that subject.
Best regards,
-------------- next part --------------
An HTML attachment was scrubbed...
From mdounin at mdounin.ru  Tue Jun  5 16:12:48 2012
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 5 Jun :48 +0400
Subject: Client Certificate verification for mail
In-Reply-To:
References:
Message-ID:
On Tue, Jun 05, 2012 at 06:04:02PM +0200, Florent Manens wrote:
> Hi NGINX team,
> I can read here :
> http://mailman.nginx.org/pipermail/nginx/2007-March/000825.html
> and in this thread :
> http://mailman.nginx.org/pipermail/nginx-ru/2009-July/026304.html
> that the client certificate verification is not supported by NGINX (and that there is no RFE for it).
> We want to implement client certificate verification for IMAP and POP connection and we plan to rely on NGINX for scalability.
> I think that it is possible to implement client certificate verification in NGINX but I still need to know :
> * if it is a trivial task
More or less.
> * if I can do it only with addons
> * why it isn't already in NGINX core ?
The second link (or, rather, Igor's reply to it) explains the
reason: it's more or less useless for large scale installations,
where nginx mail proxy is generally used.
Maxim Dounin
From tdgh2323 at hotmail.com  Tue Jun  5 17:01:34 2012
From: tdgh2323 at hotmail.com (Joseph Cabezas)
Date: Tue, 5 Jun :34 +0000
Subject: client_max_body_size for a location
{} ? Possible?
Message-ID:
Is it possible to specify a client_max_body_size and
client_body_buffer_size specifically for a location? If so how?
I need to allow higher buffers for a section that hosts an application.
-------------- next part --------------
An HTML attachment was scrubbed...
From tdgh2323 at hotmail.com  Tue Jun  5 17:26:09 2012
From: tdgh2323 at hotmail.com (Joseph Cabezas)
Date: Tue, 5 Jun :09 +0000
Subject: Graph nginx by error codes and requests per second? Cacti? or some
other?
Message-ID:
Does anybody have a monitoring system in place by nginx error code... 500, 200, 404, 444.... and did you do this with cacti or php4nagios?
-------------- next part --------------
An HTML attachment was scrubbed...
From tdgh2323 at hotmail.com  Tue Jun  5 17:31:25 2012
From: tdgh2323 at hotmail.com (Joseph Cabezas)
Date: Tue, 5 Jun :25 +0000
Subject: client_max_body_size for a location
{} ? Possible?
In-Reply-To:
References:
Message-ID:
Answering myself partially:
location /wordpress/wp-admin { client_max_body_size 1m; }
<-- does that apply for every sub directory
such as /wordpress/wp-admin/dir1 /wordpress/wp-admin/dir2/app.php etc?
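For what it's worth, a sketch of how the two directives can be scoped to a location (sizes are illustrative); a prefix location applies to every URI under it unless a more specific location matches:

```nginx
server {
    listen 80;
    client_max_body_size 1m;        # server-wide default

    location /wordpress/wp-admin/ {
        # Applies to /wordpress/wp-admin/dir1,
        # /wordpress/wp-admin/dir2/app.php, etc.,
        # unless a more specific location matches first.
        client_max_body_size 64m;
        client_body_buffer_size 256k;
    }
}
```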
From: tdgh2323 at hotmail.com
To: nginx at nginx.org
Subject: client_max_body_size for a location
{} ? Possible?
Date: Tue, 5 Jun :34 +0000
Is it possible to specify a client_max_body_size and
client_body_buffer_size specifically for a location? If so how?
I need to allow higher buffers for a section that hosts an application.
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
From jim at ohlste.in  Tue Jun  5 17:57:17 2012
From: jim at ohlste.in (Jim Ohlstein)
Date: Tue, 05 Jun :17 -0400
Subject: Netflix Open Connect Appliance Software
Message-ID:
This is a cross posting from freebsd-stable. I thought it worth giving
Igor et al a shout out:
From https://signup.netflix.com/openconnect/software :
Open Source Software
Open Connect Appliance Software
Netflix delivers streaming content using a combination of intelligent
clients, a central control system, and a network of Open Connect appliances.
When designing the Open Connect Appliance Software, we focused on these
fundamental design goals:
Use of Open Source software
Ability to efficiently read from disk and write to network sockets
High-performance HTTP delivery
Ability to gather routing information via BGP
Operating System
For the operating system, we use FreeBSD version 9.0. This was selected
for its balance of stability and features, a strong development
community and staff expertise. We will contribute changes we make as
part of our project to the community through the FreeBSD committers on
our team.
Web server
We use the nginx web server for its proven scalability and performance.
Netflix audio and video is served via HTTP.
Routing intelligence proxy
We use the BIRD Internet routing daemon to enable the transfer of
network topology from ISP networks to the Netflix control system that
directs clients to sources of content.
Acknowledgements
We would like to express our thanks to the FreeBSD community, the
nginx community, and Ondrej and the BIRD team for providing excellent
open source software. We also work directly with Igor, Maxim, Andrew,
Sergey, Ruslan and the rest of the team at nginx.com, who provide superb
development support for our project.
Contact the Open Connect team at openconnectappliance at netflix.com.
If you are interested in joining the Content Delivery or another team at
Netflix, apply at www.netflix.com/jobs
Jim Ohlstein
From kworthington at gmail.com  Tue Jun  5 18:14:57 2012
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 5 Jun :57 -0400
Subject: nginx-1.3.1
In-Reply-To:
References:
Message-ID:
Hello Nginx Users,
Now available: Nginx 1.3.1 For Windows http://goo.gl/Xvccu (32-bit and
64-bit versions)
These versions are to support legacy users who are already using
Cygwin based builds of Nginx. Officially supported native Windows
binaries are at nginx.org.
Thank you,
Kevin Worthington
kworthington *@* (gmail]
[dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington
On Tue, Jun 5, 2012 at 10:30 AM, Maxim Dounin
> Changes with nginx 1.3.1                                        05 Jun 2012
>
>     *) Security: now nginx/Windows ignores trailing dot in URI path
>        component, and does not allow URIs with ":$" in it.
>        Thanks to Vladimir Kochetkov, Positive Research Center.
>     *) Feature: the "proxy_pass", "fastcgi_pass", "scgi_pass", "uwsgi_pass"
>        directives, and the "server" directive inside the "upstream" block,
>        now support IPv6 addresses.
>     *) Feature: the "resolver" directive now supports IPv6 addresses and an
>        optional port specification.
>     *) Feature: the "least_conn" directive inside the "upstream" block.
>     *) Feature: it is now possible to specify a weight for servers while
>        using the "ip_hash" directive.
>     *) Bugfix: a segmentation fault might occur in a worker process if the
>        "image_filter" directive was used; the bug had appeared in 1.3.0.
>     *) Bugfix: nginx could not be built with ngx_cpp_test_module; the bug
>        had appeared in 1.1.12.
>     *) Bugfix: access to variables from SSI and embedded perl module might
>        not work after reconfiguration.
>        Thanks to Yichun Zhang.
>     *) Bugfix: in the ngx_http_xslt_filter_module.
>        Thanks to Kuramoto Eiji.
>     *) Bugfix: memory leak if $geoip_org variable was used.
>        Thanks to Denis F. Latypoff.
>     *) Bugfix: in the "proxy_cookie_domain" and "proxy_cookie_path"
>        directives.
> Maxim Dounin
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From matthieu.tourne at gmail.com  Tue Jun  5 18:26:51 2012
From: matthieu.tourne at gmail.com (Matthieu Tourne)
Date: Tue, 5 Jun :51 -0700
Subject: Graph nginx by error codes and requests per second? Cacti? or
some other?
In-Reply-To:
References:
Message-ID:
On Tue, Jun 5, 2012 at 10:26 AM, Joseph Cabezas
> Does anybody have a monitoring system in place by nginx error code... 500,
> 200, 404, 444.... and did you do this with cacti or php4nagios?
You can take a look at the nginx-lua module (on the logby branch) :
https://github.com/chaoslawful/lua-nginx-module/tree/logby
There is an example in the README :
https://github.com/chaoslawful/lua-nginx-module/blob/logby/README
Look for log_by_lua, and log_by_lua_file.
You can use it to aggregate values, and use another location to report
the aggregated data (using content_by_lua) and feed it into your own system.
We use OpenTSDB (http://opentsdb.net/) to keep aggregating data in time series.
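A rough sketch of that log_by_lua approach (assumes the lua-nginx-module; the shared dict name, report location, and upstream name are illustrative):

```nginx
http {
    lua_shared_dict status_counts 1m;

    server {
        location / {
            proxy_pass http://backend;
            # After each request, bump a counter keyed by status code.
            log_by_lua '
                local dict = ngx.shared.status_counts
                local key = tostring(ngx.status)
                local newval, err = dict:incr(key, 1)
                if not newval and err == "not found" then
                    dict:set(key, 1)
                end
            ';
        }

        location /status_report {
            # Dump the counters for a poller (cacti, nagios, etc.).
            content_by_lua '
                local dict = ngx.shared.status_counts
                for _, key in ipairs(dict:get_keys()) do
                    ngx.say(key, " ", dict:get(key) or 0)
                end
            ';
        }
    }
}
```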
Hope that helps,
From kworthington at gmail.com  Tue Jun  5 18:34:24 2012
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 5 Jun :24 -0400
Subject: nginx-1.2.1
In-Reply-To:
References:
Message-ID:
Hello Nginx Users,
Now available: Nginx 1.2.1 For Windows http://goo.gl/QlrVs (32-bit and
64-bit versions)
These versions are to support legacy users who are already using
Cygwin based builds of Nginx. Officially supported native Windows
binaries are at nginx.org.
