Protection in General-Purpose Operating Systems
Unit 4
Index
• Overview
• File protection mechanisms
• User authentication
• Designing Trusted OS
• Security Policy
• Models of Security
• Trusted Operating System Design
Overview
• An operating system has two goals:
– controlling shared access
– implementing an interface to allow that access
• Underneath those goals are support activities, including identification and authentication, naming, filing objects, scheduling, communication among processes, and reclaiming and reusing objects.
Overview cont.
• Operating system functions can be categorized as:
– access control
– identity and credential management
– information flow
– audit and integrity protection
• Each of these activities has security implications.
File Protection Mechanisms
• Basic Forms of Protection
All-None Protection
Unacceptable for several reasons:
– Lack of trust
– Too coarse
– Rise of sharing
– Complexity
– File listings
Basic Forms of Protection (Cont’d)
Group Protection
– Focused on identifying groups of users who have some common relationship.
– All authorized users are separated into groups.
– A group may consist of several members working on a common project, a department, a class, or a single user.
– The basis for group membership is the need to share.
– A key advantage of the group protection approach is its ease of implementation.
Basic Forms of Protection (Cont’d)
• Group Protection (Cont’d)
– Group affiliation: A single user cannot belong to two
groups.
– Multiple personalities: To overcome the one-person
one-group restriction, certain people might obtain
multiple accounts, permitting them, in effect, to be
multiple users.
– All groups: To avoid multiple personalities, the
system administrator may decide that Tom should
have access to all his files any time he is active.
– Limited sharing: Files can be shared only within
groups or with the world.
Basic Forms of Protection (Cont’d)
• Individual Permissions
– Persistent Permission
– Temporary Acquired Permission
• Unix+ operating systems provide an interesting
permission scheme based on a three-level user-group-
world hierarchy.
• The Unix designers added a permission called set
userid (suid)
• Per-Object and Per-User Protection
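As a rough illustration of the user–group–world scheme, the sketch below shows how a permission check might walk the three rwx triads and how the suid bit can be tested. It is a simplified, hypothetical Python rendering (the names may_access and is_suid are invented for this sketch); real kernel logic also handles the root override, ACLs, and other special bits.

```python
import os
import stat

def may_access(path, uid, gids, want_read=False, want_write=False, want_execute=False):
    """Simplified Unix-style user/group/world permission check (illustrative only)."""
    st = os.stat(path)
    mode = st.st_mode

    # Pick the triad that applies: owner first, then group, then other.
    if uid == st.st_uid:
        r, w, x = stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR
    elif st.st_gid in gids:
        r, w, x = stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP
    else:
        r, w, x = stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH

    if want_read and not (mode & r):
        return False
    if want_write and not (mode & w):
        return False
    if want_execute and not (mode & x):
        return False
    return True

def is_suid(path):
    # With suid set, an executable runs with the privileges of its owner
    # rather than those of the caller.
    return bool(os.stat(path).st_mode & stat.S_ISUID)
```

For example, may_access("/etc/passwd", uid=1000, gids={1000}, want_read=True) would typically return True, because the world triad of /etc/passwd usually grants read.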
User Authentication
• Authentication mechanisms use any of three qualities to confirm a user's identity.
– Something the user knows. Passwords, PINs, passphrases, a secret handshake, and mother's maiden name are examples of what a user may know.
– Something the user has. Identity badges, physical keys, a license, or a uniform are common examples of things people have that make them recognizable.
– Something the user is. These authenticators, called biometrics, are based on a physical characteristic of the user.
User Authentication (Cont’d)
• Passwords as Authenticators
– Use of Passwords
– Passwords are mutually agreed-upon code words, assumed to be known only to the user and the system.
• Passwords suffer from some difficulties of use:
– Loss. Depending on how the passwords are implemented, it is possible that no one will be able to replace a lost or forgotten password.
– Use. Supplying a password for each access to a file can be inconvenient and time consuming.
– Disclosure. If a password is disclosed to an unauthorized individual, the file becomes immediately accessible.
– Revocation. Revoking one user's access to a shared file means changing the password, which affects every other user of that file.
Passwords as Authenticators
• Additional Authentication Information
– Using additional authentication information is called multifactor authentication.
– Two forms of authentication (known as two-factor authentication) are better than one, assuming of course that both forms are strong (a minimal sketch follows).
– But as the number of forms increases, so does the inconvenience.
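A minimal sketch of two-factor authentication, assuming a hypothetical user_record that stores a salted password hash and the currently expected one-time code; this illustrates the idea of requiring both factors independently, not a production scheme.

```python
import hashlib
import hmac

def verify_password(stored_salt, stored_hash, supplied_password):
    # Hash the supplied password with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", supplied_password.encode(), stored_salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def verify_second_factor(expected_code, supplied_code):
    # The second factor could be a code from a hardware token or a message to a phone.
    return hmac.compare_digest(expected_code, supplied_code)

def authenticate(user_record, password, code):
    # Two-factor authentication: both checks must pass.
    return (verify_password(user_record["salt"], user_record["hash"], password)
            and verify_second_factor(user_record["current_code"], code))
```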
Passwords as Authenticators
• Attacks on Passwords
• Some ways you might be able to determine a
user's password, in decreasing order of
difficulty.
– Try all possible passwords.
– Try frequently used passwords.
– Try passwords likely for the user.
– Search for the system list of passwords.
– Ask the user.
Passwords as Authenticators
• Attacks on Passwords (Cont’d)
• Loose-Lipped Systems
– E.g.,
WELCOME TO THE XYZ COMPUTING SYSTEMS
ENTER USER NAME: adams
INVALID USER NAME / UNKNOWN USER
ENTER USER NAME:
Passwords as Authenticators
• Attacks on Passwords (Cont’d)
• Loose-Lipped Systems (Cont’d)
– An alternative arrangement of the login sequence is
shown below.
WELCOME TO THE XYZ COMPUTING SYSTEMS
ENTER USER NAME: adams
ENTER PASSWORD: john
INVALID ACCESS
ENTER USER NAME:
Passwords as Authenticators
• Attacks on Passwords (Cont’d)
• Loose-Lipped Systems (Cont’d)
ENTER USER NAME: adams
ENTER PASSWORD: john
INVALID ACCESS
ENTER USER NAME: adams
ENTER PASSWORD: johnq
WELCOME TO THE XYZ COMPUTING SYSTEMS
Passwords as Authenticators
• Attacks on Passwords (Cont’d)
• Exhaustive Attack
– In an exhaustive or brute-force attack, the attacker tries all possible passwords, usually in some automated fashion.
– Probable Passwords
– Passwords Likely for a User
Passwords as Authenticators
• Attacks on Passwords (Cont’d)
• Password guessing steps an attacker typically tries, in order (a sketch follows this list):
– no password
– the same as the user ID
– the user's name, or something derived from it
– words from a common word list (for example, "password," "secret," "private") plus common names and patterns (for example, "asdfg," "aaaaaa")
– words from a short college dictionary
– words from a complete English word list
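The guessing order above can be sketched as a simple loop. The check callable, the wordlist argument, and the guess_password name are assumptions made for illustration; a real attack would usually test candidates against a captured password hash rather than a live login prompt.

```python
def guess_password(check, user_id, user_name, wordlist):
    """Try candidates in roughly the order listed above.

    `check(user_id, candidate)` is a hypothetical callable that returns True
    when a candidate password is accepted.
    """
    candidates = [
        "",                                 # no password
        user_id,                            # same as the user ID
        user_name,                          # the user's name or something derived from it
        user_name.lower(),
        "password", "secret", "private",    # common word list
        "asdfg", "aaaaaa",                  # common patterns
    ]
    candidates.extend(wordlist)             # short dictionary, then a full word list

    for candidate in candidates:
        if check(user_id, candidate):
            return candidate
    return None
```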
Passwords as Authenticators
• One-Time Passwords
• Biometrics: Authentication Not Using
Passwords
• Identification vs. Authentication
• More reliable, but less effective
Designing Trusted Operating Systems
• An operating system is trusted if we have confidence that it addresses these four elements consistently and effectively:
– Policy - every system can be described by its
requirements: statements of what the system
should do and how it should do it.
– Model - designers must be confident that the
proposed system will meet its requirements while
protecting appropriate objects and relationships.
Designing Trusted Operating Systems
– Design - designers choose a means to implement
it.
– Trust - trust in the system is rooted in two
aspects:
• FEATURES - the operating system has all the
necessary functionality needed to enforce the expected
security policy.
• ASSURANCE - the operating system has been
implemented in such a way that we have confidence it
will enforce the security policy correctly and effectively.
Trustworthy OS
• An OS is trusted if it provides:
– Memory protection
– General object access control
– User authentication
• In a consistent and effective manner.
• Why trusted OS, why not secure OS?
“Secure” vs. “Trusted”
Security Policies
Security policy: statement of the security we expect the
system to enforce.
• Military Security Policy
– Based on protecting classified information.
– Each piece of information is ranked at a particular
sensitivity level, such as unclassified, restricted,
confidential, secret, or top secret.
– The ranks or levels form a hierarchy, and they reflect an increasing order of sensitivity.
Military Security Policy
Figure 1 Hierarchy of Sensitivities (ordered from least to most sensitive).
Figure 2 Compartments and Sensitivity Levels.
Compartments in a military security policy.
Figure 3 Association of Information and Compartments.
A single piece of information may belong to multiple compartments.
Terms
• Information falls under different degrees of sensitivity:
– Unclassified to top secret.
– Each sensitivity is determined by a rank. E.g., unclassified has rank
0.
• Need to know: enforced using compartments
– E.g., a particular project may need to use information that is both top secret and secret. Solution: create a compartment to cover the information in both.
– A compartment may include information across multiple
sensitivity levels.
• Clearance: A person seeking access to sensitive
information must be cleared. Clearance is expressed as a
combination: <rank; compartments>
Dominance relation
• Consider subject s and an object o.
– s <= o if and only if:
• rank_s <= rank_o and
• compartments_s is a subset of compartments_o
– E.g., a subject can read an object only if:
• The clearance level of the subject is at least as high as that of the
information and
• The subject has a need to know about all compartments for which
the information is classified.
• E.g., information <secret, {Sweden}> can be read by someone with clearance <top_secret, {Sweden}> or <secret, {Sweden}>, but not by <top_secret, {Crypto}> (see the sketch below).
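A minimal sketch of the dominance check, assuming labels are encoded as hypothetical (rank, compartments) pairs with integer ranks; the dominates function name is invented for illustration.

```python
def dominates(subject, obj):
    """Return True when the subject's clearance dominates the object's classification."""
    s_rank, s_compartments = subject
    o_rank, o_compartments = obj
    # <= between sets means "is a subset of".
    return o_rank <= s_rank and o_compartments <= s_compartments

TOP_SECRET, SECRET = 4, 3  # hypothetical integer encoding of ranks

# <secret, {Sweden}> is readable under <top_secret, {Sweden}> ...
print(dominates((TOP_SECRET, {"Sweden"}), (SECRET, {"Sweden"})))   # True
# ... and under <secret, {Sweden}> ...
print(dominates((SECRET, {"Sweden"}), (SECRET, {"Sweden"})))        # True
# ... but not under <top_secret, {Crypto}>: no need to know for Sweden.
print(dominates((TOP_SECRET, {"Crypto"}), (SECRET, {"Sweden"})))    # False
```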
Figure 4 Commercial View of Sensitive Information.
Commercial security policies
What are some of the needs of a commercial policy?
Example: Chinese Wall Policy
Addresses the needs of commercial organizations: legal, medical, investment, and accounting firms.
Key protection: conflict of interest (a sketch of the check follows the abstractions below).
Abstractions:
Objects: elementary objects such as files.
Company groups: at the next level, all objects concerning
a particular company are grouped together.
Conflict classes: all groups of objects for competing
companies are clustered together.
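A minimal sketch of the Chinese Wall conflict-of-interest rule, under the assumption that each company group maps to exactly one conflict class; the sample data and the may_access_chinese_wall name are invented for illustration.

```python
# Hypothetical data: each company group belongs to exactly one conflict class.
conflict_class = {
    "Bank A": "banking", "Bank B": "banking",
    "Oil X": "petroleum", "Oil Y": "petroleum",
}

def may_access_chinese_wall(history, company):
    """Allow access unless the subject has already seen a competing company.

    `history` is the set of company groups this subject has previously accessed.
    Re-accessing a group already in the history is always allowed; a new group
    is allowed only if no earlier access falls in the same conflict class.
    """
    if company in history:
        return True
    return all(conflict_class[seen] != conflict_class[company] for seen in history)

history = set()
assert may_access_chinese_wall(history, "Bank A")       # first access is fine
history.add("Bank A")
assert not may_access_chinese_wall(history, "Bank B")   # competitor: blocked
assert may_access_chinese_wall(history, "Oil X")        # different conflict class: fine
```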
Figure 5 Chinese Wall Security Policy.
Chinese Wall Security Policy
Clark-Wilson Model
• Defines tuples (access triples) for every operation: <userID, transformationProcedure, {CDIs…}>
• userID: the person who can perform the operation.
• transformationProcedure: performs only certain operations depending on the data, e.g., writeACheck only if the data's integrity is maintained.
• CDIs: constrained data items, i.e., data items with certain attributes. E.g., when the receiving clerk sends the delivery form to the accounting clerk, the delivery form has already been "checked" by the receiving clerk.
Think of these as "stamps" of approval (a small sketch of triple checking follows).
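A minimal sketch of checking an operation against certified triples; the example triples, CDI names, and the may_run function are assumptions made up for illustration.

```python
# Hypothetical certified access triples: <userID, transformationProcedure, {CDIs}>.
triples = {
    ("receiving_clerk", "record_delivery", frozenset({"delivery_form"})),
    ("accounting_clerk", "write_a_check", frozenset({"delivery_form", "check"})),
}

def may_run(user, tp, cdis):
    """A user may run a transformation procedure only on CDIs named in some
    certified triple; every other combination is rejected."""
    return any(u == user and t == tp and frozenset(cdis) <= c
               for (u, t, c) in triples)

print(may_run("accounting_clerk", "write_a_check", {"check"}))   # True
print(may_run("receiving_clerk", "write_a_check", {"check"}))    # False: not certified
```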
Security Models
While policies tell us what we want, models tell us formally what conditions we need to enforce in order to achieve a policy.
We study models for various reasons:
(i) to test a particular policy for completeness and consistency
(ii) to document a policy
(iii) to help conceptualize and design an implementation
(iv) to check whether an implementation meets its requirements
Example Models
(i) Bell-LaPadula Model: enforces confidentiality.
(ii) Biba Model: enforces integrity.
To understand these, we study a structure called a lattice.
A lattice is a partial ordering in which every pair of elements has a least upper bound and a greatest lower bound.
E.g., the military model is a lattice.
E.g., <secret, {Sweden}> and <secret, {France}> have a least upper bound and a greatest lower bound (a small sketch follows Figure 6).
Figure 6 Sample Lattice.
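A minimal sketch of least upper bound and greatest lower bound for military labels, using the same hypothetical (rank, compartments) encoding as before: the LUB takes the higher rank and the union of compartments, and the GLB takes the lower rank and the intersection.

```python
def least_upper_bound(a, b):
    """LUB of two military labels: higher rank, union of compartments."""
    return (max(a[0], b[0]), a[1] | b[1])

def greatest_lower_bound(a, b):
    """GLB of two military labels: lower rank, intersection of compartments."""
    return (min(a[0], b[0]), a[1] & b[1])

SECRET = 3  # hypothetical integer encoding

# <secret, {Sweden}> and <secret, {France}>:
print(least_upper_bound((SECRET, {"Sweden"}), (SECRET, {"France"})))
# -> (3, {'Sweden', 'France'})  (set order may vary)
print(greatest_lower_bound((SECRET, {"Sweden"}), (SECRET, {"France"})))
# -> (3, set())
```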
Bell-LaPadula Model for Confidentiality
Tells us what conditions need to be met to satisfy confidentiality when implementing multi-level security policies (e.g., military policies).
Consider a security system with the following properties:
(i) the system contains a set of subjects S
(ii) a set of objects O
(iii) each subject s in S and each object o in O has a fixed security class, written C(s) and C(o)
– In military security, examples of classes are secret, top secret, etc.
(iv) security classes are ordered by the relation <=
Bell-LaPadula Model for Confidentiality
Properties:
• Simple security property: A subject s may have read access to an object o only if C(o) <= C(s).
• *-Property: A subject s who has read access to an object o may have write access to an object p only if C(o) <= C(p) (see the sketch below).
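A minimal sketch of the two Bell-LaPadula checks, assuming security classes are totally ordered and encoded as integers (compartments are omitted for brevity); the function names are invented for illustration.

```python
def ss_property(subject_class, object_class):
    """Simple security property: s may read o only if C(o) <= C(s) ("no read up")."""
    return object_class <= subject_class

def star_property(read_object_class, write_object_class):
    """*-property: having read o, s may write p only if C(o) <= C(p) ("no write down")."""
    return read_object_class <= write_object_class

TOP_SECRET, SECRET = 4, 3  # hypothetical integer encoding of classes

print(ss_property(subject_class=TOP_SECRET, object_class=SECRET))                   # True: read down is allowed
print(star_property(read_object_class=TOP_SECRET, write_object_class=SECRET))       # False: write down is blocked
```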
Figure 8 Subject, Object, and Rights.
Need for the two policies:
Definition of subject, object and access rights.
E.g., s can “r” or “read” object o.
Figure 7 Secure Flow of Information.
Bell-LaPadula: read down, write up.
Biba Model for Integrity
Bell-LaPadula addresses only confidentiality; what about integrity? We need a corresponding policy.
Biba Model for Integrity
Simple integrity policy: Subject s can modify (write) object o only if I(s) >= I(o).
Here I is analogous to C, except that I denotes an integrity class.
Integrity *-Property:
If subject s has read access to object o with integrity level I(o), s can have write access to object p only if I(o) >= I(p).
Why is the second policy important? (The sketch below hints at the answer.)
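A minimal sketch of the two Biba checks, assuming integrity classes are encoded as integers; the function names are invented. The *-property matters because without it a subject could read low-integrity data and then write it into a high-integrity object, contaminating it.

```python
def biba_simple(subject_integrity, object_integrity):
    """Simple integrity policy: s may write o only if I(s) >= I(o)."""
    return subject_integrity >= object_integrity

def biba_star(read_integrity, write_integrity):
    """Integrity *-property: having read o, s may write p only if I(o) >= I(p).
    Stops low-integrity data read from o from flowing into a higher-integrity p."""
    return read_integrity >= write_integrity

HIGH, LOW = 2, 1  # hypothetical integer encoding of integrity classes

print(biba_simple(subject_integrity=LOW, object_integrity=HIGH))   # False: cannot write up
print(biba_star(read_integrity=LOW, write_integrity=HIGH))         # False: a dirty read cannot flow up
```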
Trusted OS Design
• The policies tell us what we want.
• The models tell us the properties that must be satisfied for the policies to succeed.
• Next: designing an OS that is trusted.
Trusted OS design principles:
• Principle of least privilege
• Economy of mechanism
• Open design
• Complete mediation
• Permission based
• Separation of privilege
• Least common mechanism
• Ease of use
Review: Overview of an Operating System’s Functions.
Figure 5-11 Security Functions of a Trusted Operating System.
Key Features of a Trusted OS
• User identification and authentication (we already studied this)
• Access control:
– Mandatory
– Discretionary
– Role-based
• Complete mediation
• Trusted path
• Audit
• Audit log reduction
• Intrusion detection
