On Wednesday, February 18, 2015 at 10:39:30 AM UTC-6, Siegfried Wagner wrote:
Hi,
As I’ve noticed, there are already quite a few people using Lucee in cloud setups, so you have probably also faced (and solved) the problem of how to share sessions between Lucee instances.
As far as I can tell, there is more than one solution to this. What is the best way to set it up (e.g. on Amazon AWS)?
I’m relatively new to Railo and Lucee, but I’m trying to get up to speed as quickly as possible (and I’m working with a lot of people who understand it very well!).
Any help is appreciated.
I’ll add that Ortus has a commercial Couchbase extension for Railo that can store sessions externally on a distributed cluster. We haven’t updated it for Lucee yet, but let me know if you’re interested.
Thanks!
~Brad
The easiest way I’ve heard of to avoid the need for shared sessions is to use Nginx as your web server / load balancer. Within the upstream directive of the server acting as load balancer, simply set the ip_hash directive; this ensures that the same Lucee instance is always used for a given client, obviating the need for shared sessions.
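A minimal sketch of that setup, in case it helps (the upstream name, IPs, and ports below are hypothetical placeholders, not values from this thread):

```nginx
# Hash on the client IP so each client always reaches the same backend
upstream lucee_cluster {
    ip_hash;
    server 10.0.0.11:8888;   # Lucee instance A (hypothetical)
    server 10.0.0.12:8888;   # Lucee instance B (hypothetical)
}

server {
    listen 80;
    location / {
        proxy_pass http://lucee_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Note that ip_hash pins clients by IP, so users behind a shared NAT or proxy will all land on the same backend.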
Aria Media Sagl
Via Rompada 40
6987 Caslano
Switzerland
I believe that is handled by the this.sessionCluster setting. When set to true, the external storage is favored over anything that might be in memory on the server. When set to false, the in-memory storage on the server is favored over the external storage. I’d need to do a test, but I THINK the session is always saved on request end, but ONLY read back if sessionCluster is set to true.
My initial understanding is that the setting is for people who want long-term session persistence across restarts but don’t have to worry about users potentially hitting a new server on every request. I’m guessing it would also do nicely for any form of load balancing with server affinity: the user gets stuck to a single instance and uses the in-memory session, but if that server dies and they get kicked over to another, it can pull their session there as well.
This is fairly easy to test with the Railo Couchbase extension: just hit a page and see how many reads and writes happened on the session bucket.
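As a sketch of how those settings fit together in Application.cfc (the storage name "sessions_ds" is a hypothetical datasource or cache connection, and the true/false behavior is as described above, subject to the testing caveat):

```cfml
// Application.cfc -- minimal sketch, not a drop-in config
component {
    this.name = "myApp";
    this.sessionManagement = true;
    this.sessionTimeout = createTimeSpan(0, 0, 30, 0);

    // Keep each session in external storage as well
    // ("sessions_ds" is a hypothetical datasource/cache connection name)
    this.sessionStorage = "sessions_ds";

    // true  = external storage wins over local memory (read every request)
    // false = local memory wins; the external copy acts as a failover backstop
    this.sessionCluster = true;
}
```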
I understand your point. Unless I’m mistaken, I’ve always thought that sessions are read on every request with CF/Railo/Lucee, which would always mean a read from the session store (be it local in-memory or off-server: database, Redis, Couchbase, whatever).
Otherwise, how do you tell the Lucee app to read from two different session storages? Any time you write something to the session, also have it write to your off-site storage? Would something like that be best captured in OnRequestEnd(), or something that copies the session data to your common session store after the request is done?
In the case of being bounced to a new server, do you have OnSessionStart() read from the off-site store to populate the local session, and if it doesn’t exist, create the session in both places?
On Thu Feb 19 2015 at 10:15:48 AM Jochem van Dieten <@Jochem_van_Dieten> wrote:
On Thu, Feb 19, 2015 at 4:42 PM, Dan Kraus wrote:
Amazon has sticky sessions with their load balancers but I think a good
goal for scaling is round robin this way.
Round robin is not good for scaling. With round robin, every request needs to consult the common session storage. With sticky sessions, the server can consult its local storage, which is typically in RAM and faster.
This way, if any server goes down, the user’s session isn’t stuck in a
dead server.
But that doesn’t require round robin. Even if you enable sticky sessions, when a cluster member goes away the load balancer will stop sending it traffic and direct the user to a different server. To make sure the session is available on that server, what you need is:
1. servers send all their session changes to the common session storage;
2. servers read the session from the common session storage only when they receive a request with a session ID that is not present on the current server.
For scalability this is much better: write only changes, and read once per session after a server failure, instead of reading on every request. The only thing you have to make sure of in your load balancer configuration is that if the sticky server goes away, a new sticky server is picked and requests don’t start to round-robin.
Jochem
That’s exactly the way the this.sessionCluster setting is supposed to be used.
I actually don’t like sticky sessions, since a busy server would still get requests from all the users stuck to it by their stickiness.
Therefore I always use round robin or some other load-balancing technique, and I set sessionCluster to true so that the session is read every time a request is initialized.
Gert
Sent from somewhere on the road
Amazon has sticky sessions with their load balancers, but I think round robin is a good goal for scaling. That way, if any server goes down, the user’s session isn’t stuck on a dead server.
Using database storage is one way to do it, but depending on the traffic I might not want it on my regular database; better something that just handles sessions. Pulling sessions in and out of the DB is going to be slower than accessing them from memory. Redis is one option. I haven’t put it into production yet, only some test beds, but I’m very likely going to be implementing it in a big way. I just wrote up a blog post about it last week.
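For what it’s worth, once a Redis cache connection has been defined in the Lucee admin (the name "sessionRedis" below is a hypothetical example, not from this thread), pointing session storage at it is a small Application.cfc change:

```cfml
// Application.cfc -- sketch assuming a cache connection named "sessionRedis"
// has already been created via the Redis cache extension in the Lucee admin
component {
    this.name = "myApp";
    this.sessionManagement = true;
    this.sessionStorage = "sessionRedis"; // cache connection, not a datasource
    this.sessionCluster = true;           // re-read from Redis on each request
}
```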
Due to money constraints, I am looking to create the most basic of setups:
2 x VPS with Railo/IIS7 & MySQL
I feel it is best to use local connections to the database for speed, so I will be using two-way data synchronisation between the two databases.
When I create my lucee_sessions DB, can I create it locally (one for each server) and then have ServerA.lucee_sessions synchronised with ServerB.lucee_sessions? That way, whenever a key is written to ServerA.lucee_sessions, it will update ServerB.lucee_sessions accordingly.
Would I set:
this.sessionCluster = false;
Presumably:
this.sessionCluster = true;
is only used when the lucee_sessions DB lives on its own dedicated DB server, creating one single common session store used by ServerA & ServerB?
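For reference, the datasource-backed variant of that setup might be sketched like this (assuming a Lucee datasource also named "lucee_sessions" pointing at the session database; verify the storage table handling against the Lucee docs for your version):

```cfml
// Application.cfc -- sketch for datasource-backed session storage
component {
    this.name = "myApp";
    this.sessionManagement = true;
    this.sessionStorage = "lucee_sessions"; // datasource for the session DB
    // With one shared session DB: true makes every request read from it,
    // which is safe under round robin.
    // With per-server DBs kept in sync: false favors the local copy and
    // only makes sense with sticky sessions.
    this.sessionCluster = true;
}
```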