Wednesday, June 6, 2007

QoS for servers

Guys, this is something you must reply to and discuss.

Currently I am analysing what the QoS (quality of service) requirements of a server can be. The server can be a web server, a database server, or anything else. Basically, QoS helps in differentiating the kinds of service offered to different clients.

Please comment on this. I am trying to list some quantifiable requirements that clients ask for, or that are essential. (Sometimes some of these terms sound boring, but delivering such things with the software is the most critical and beneficial work.)

The major requirements for supporting QoS in a software server are as follows:

  • Performance:
    • Throughput – minimum, peak, allowed burst; a reward function for achieving throughput in [min, peak], and a penalty function for throughput below min (see the sketch after this list).
    • Response time / latency – distribution or percentile.
  • Availability: Probability that a service is available.
    • Duration for which service should be available.
    • Time-to-repair (TTR).
  • Accessibility: A probability measure denoting the success rate or chance of a successful service instantiation at a point in time; requires a scalable system.
  • Integrity / Transactional QoS
  • Reliability: The ability to maintain the service and service quality; the number of failures per month or year is one measure of reliability.
  • Security: Providing confidentiality and non-repudiation by authenticating the parties involved, encrypting messages, and providing access control.
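
To make the throughput requirement concrete, here is a rough Python sketch of such a reward/penalty function. The linear shapes, rates, and parameter names are my own illustrative assumptions; a real SLA would define these contractually.

```python
def throughput_utility(throughput, t_min, t_peak,
                       reward_rate=1.0, penalty_rate=2.0):
    """Illustrative SLA utility: linear reward for throughput in
    [t_min, t_peak] (capped at t_peak), linear penalty below t_min.
    All shapes and rates here are assumptions, not from a real SLA."""
    if throughput < t_min:
        # Penalty grows with the shortfall below the guaranteed minimum.
        return -penalty_rate * (t_min - throughput)
    # Reward grows up to the peak; bursts beyond it earn nothing extra.
    return reward_rate * (min(throughput, t_peak) - t_min)

# Example: 100 req/s guaranteed minimum, 500 req/s peak.
print(throughput_utility(80, 100, 500))    # -40.0 (below min -> penalty)
print(throughput_utility(300, 100, 500))   # 200.0 (within [min, peak])
print(throughput_utility(700, 100, 500))   # 400.0 (capped at peak)
```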

2 comments:

Chetan Pathak said...

I hope I am on the right track with whatever I am suggesting. Please tell me if otherwise.

Under availability, you could maybe list mirrors, or the ability to mirror, as a criterion. This may sound like a solution to the availability feature, but I think it can also be considered a factor in deciding the QoS of the server.

Also, under the security feature, other than "confidentiality and non-repudiation" (both of which primarily indicate a defence against over-the-wire attacks), you can also include some factors depicting the security of the data-at-rest on the server itself.

Rahul said...

Thanks, Chetan, for replying. You are on the right track.
I guess the ability to mirror can be part of the second-level design, as it will ultimately contribute to the probability of whether the service is available.

Data-at-rest is a crucial point, thanks for pointing it out. I just read some information about it and summarized it at the end of the post.

My intent in listing these QoS parameters is to model the system in such a way that a server hosting different applications together using virtualization can deliver QoS guarantees to each application, i.e. there should be performance isolation. The applications can be heterogeneous, so they may have similar as well as different resource requirements, e.g. CPU-intensive, pure-disk, or network-intensive tasks. This is the major challenge.
So we need to do everything from admission control to workload management. And obviously the aim is maximum utilization of resources and, most importantly, maximum "goodput". A rough sketch of this admission-control idea follows below.
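
To illustrate (this is a toy sketch, not the actual model; all application names and capacities are made up), a per-application reservation scheme in Python:

```python
# Toy admission-control sketch: each application gets a reserved share of
# concurrent requests, so one overloaded application cannot starve the
# others. This is a crude form of performance isolation that protects
# goodput. All application names and capacities are illustrative.

class AdmissionController:
    def __init__(self, reservations):
        # reservations: application name -> max concurrent requests
        self.reservations = dict(reservations)
        self.in_flight = {app: 0 for app in reservations}

    def try_admit(self, app):
        """Admit the request only if the app is within its reservation."""
        if self.in_flight[app] < self.reservations[app]:
            self.in_flight[app] += 1
            return True
        return False   # rejected: work we would waste is refused up front

    def complete(self, app):
        self.in_flight[app] -= 1

# Example: a CPU-intensive and a disk-intensive app sharing one host.
ctl = AdmissionController({"cpu_app": 30, "disk_app": 20})
print(ctl.try_admit("cpu_app"))   # True while cpu_app is under its share
```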

In yesterday's model, I am adding request-dropping probability as another performance measure (which is linked with accessibility).
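
For what it's worth, if we model the server as a simple M/M/1/K queue (my modelling assumption here for illustration, not something from yesterday's model), the dropping probability has a closed form, P_drop = (1 − ρ)ρ^K / (1 − ρ^(K+1)) with ρ = λ/μ:

```python
def mm1k_drop_probability(lam, mu, K):
    """Blocking probability of an M/M/1/K queue: arrival rate lam,
    service rate mu, room for K requests in the system (including
    the one in service). Standard queueing-theory result; treating
    the server this way is an assumption for illustration."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (K + 1)    # degenerate case: uniform occupancy
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

# Example: 90 req/s arriving, 100 req/s service rate, buffer of 10.
print(mm1k_drop_probability(90.0, 100.0, 10))   # ~0.05, i.e. ~5% dropped
```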

I purposely haven't mentioned virtualization, as I wanted you to come up with the requirements of a server regardless of where it is deployed.
Comments awaited.


Data-at-rest ->
Over time, people have mostly focused on the security of data-in-motion. Protecting data-at-rest is costly in terms of performance, but many researchers are coming up with better solutions.

I read a paper, "A New Image-Database Encryption Based on a Hybrid Approach of Data-at-Rest and Data-in-Motion Encryption Protocol" (April 2004) by Ooi Bee Sien, Azman Samsudin, and Rahmat Budiarto. It talks about combining both approaches by encrypting data on the client side, so that the encryption overhead on the server is eliminated and SSL is not required in such a case.
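
Just to illustrate the client-side idea (this is my own minimal sketch, not the paper's actual protocol), using the third-party Python 'cryptography' package:

```python
# Client-side encryption sketch: the client encrypts before upload and
# decrypts after download, so the server only ever stores ciphertext and
# needs neither encryption capability nor (per the paper's argument) SSL.
# Assumes the third-party 'cryptography' package; names are illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # stays with the client, never sent out
cipher = Fernet(key)

plaintext = b"sensitive image data"
ciphertext = cipher.encrypt(plaintext)   # this is what gets uploaded

# Later, after downloading the ciphertext back from the server:
assert cipher.decrypt(ciphertext) == plaintext
```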

Another paper, "Taking a Hard-Line Approach to Encryption" (March 2007) by Cameron Laird, talks about encryption-enabled hard-drive systems.

The central idea is to encrypt all data in real time as it is stored on a computer’s hard disk.
Encryption-enabled hard drives could be either external or internal, while the encryption hardware could be either part of the drive or an independent module.
The Trusted Computing Group (TCG) industry consortium (www.trustedcomputinggroup.org) is working on this.

And in another article, "Cybersecurity Costs: Balancing Blanket Security with Real-World Practicality" (March–April 2007), W. Chou divides cybersecurity into four separate components:
• Network cybersecurity
• Host cybersecurity – to protect applications and OS software.
• Storage cybersecurity – to secure stored data-at-rest.
• Administrative "cyber"-security – the non-cybersecurity element, consisting of human and process security.