April 17, 2015

Mark Dixon - Oracle - Welcome Home Apollo 13 [Technorati links]

April 17, 2015 02:57 PM


Forty-five years ago today, the embattled crew of Apollo 13 safely returned home. Against great odds, aided by terrific ingenuity from crews on the ground and undoubtedly by divine providence, the Apollo 13 crew survived an oxygen tank explosion and the resultant failure of other systems through improvisation, steely dedication and pure grit.

I was just finishing my junior year of high school when this occurred. Apollo 13 has been an inspiration to me ever since.

 

Photo: Astronauts James Lovell, John Swigert and Fred Haise are shown soon after their rescue still unshaven and wearing space overalls. 

OpenID.net - Final OpenID 2.0 to OpenID Connect Migration Specification Approved [Technorati links]

April 17, 2015 12:24 AM

The OpenID 2.0 to OpenID Connect Migration specification has been approved as a Final Specification by a vote of the OpenID Foundation members. A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision.

This specification defines how to migrate from OpenID 2.0 to OpenID Connect.

The voting results were:

Total votes: 32 (out of 158 members = 20.3% > 20% quorum requirement)

— Michael B. Jones – OpenID Foundation Board Secretary

April 16, 2015

Courion - Intelligent IAM: Improving Governance Processes [Technorati links]

April 16, 2015 01:40 PM

Access Risk Management Blog | Courion

This is the second installment in a 3-part series that explores how intelligence improves identity & access management, or IAM. In part 1, we looked at how intelligence improves the provisioning portion of IAM, which helps to ensure that the right people are getting the right access to the right resources.  In this section, we’ll look at how intelligence improves the governance portion of IAM, with a focus on validating that the right people currently have the right access to the right resources.

Governance is a verification process, essentially the QA portion of IAM. Many organizations use a manual certification process to verify access, which is essentially a large report that provides a list of users along with their associated access. The certification itself may be a paper-based tool or an electronic tool like Excel. Regardless of the medium, the process is essentially the same and the expectation is that reviewers will look at each user/access assignment and make an informed decision as to whether or not the granted access is appropriate. Depending upon company size, an average reviewer may be responsible for hundreds if not thousands of decisions.  That sounds like fun, right?  In addition to the fact that a certification is a lengthy, time-consuming process, it is also a mind-numbing exercise. It’s no wonder certifications are relegated to an annual or perhaps a semi-annual punishment; pity the folks who tackle this on a quarterly basis. I wonder if anyone has ever collected any statistics that indicate a causal relationship between the scheduling of a company-wide certification and requested vacation days.

So, why do certifications at all? As painful as they may be, certifications serve an important security function; at least that’s the intent – your mileage may differ. If you think of access to corporate resources as being somewhat analogous to having a set of keys to your house, don’t you want to make sure you have tight control over who has a set of keys? Since the provisioning process incorporates a robust approval process, then why do we need to do periodic certifications on the back end? Haven’t we already ensured that the access assignments are appropriate on the front end? Well, yes and no, but mostly no.

You’ve heard the adage, “the only constant in this world is change.” Well, the average corporate environment exemplifies that sentiment. Corporations are dynamic entities. Corporate resources are often being added to or removed from the environment and the data that resides on those resources is constantly changing. Arguably, the most dynamic aspect of a corporation is the human resource component; employees come and go, they join and leave projects, change jobs and/or change departments. In addition, there are often contractors or temporary personnel, which adds another wrinkle to the situation. The limitation of verifying access only during provisioning is the fact that decisions are made in the moment, based upon one’s knowledge of the circumstances that exist at that point in time.

However, as discussed above, circumstances change over time and a decision that was appropriate last year, last month or even yesterday may not be appropriate today. Therefore, a governance process is necessary in order to ensure that access assignments remain appropriate within a dynamic environment. In addition, the governance process must be thoughtfully executed in order to achieve its goal. Unfortunately, a governance process, devoid of intelligence, tends to devolve into a rubber-stamp exercise. Asking a reviewer to make decisions upon hundreds or thousands of access assignments that all feel similar in importance coupled with a reviewer’s tendency to believe that the access assignments are probably already correct isn’t a recipe for a strong governance cycle.

By contrast, IAM intelligence in the form of data analytics can make dramatic improvements to the governance process. Envision a certification that is no longer a flat list, but instead organized into sections based upon the degree of attention required of a reviewer. One section may contain all of the access a user has that is in complete alignment with the user’s job title or equivalent to access provided to colleagues. This section probably needs little more than a cursory review.

However, another section may contain all of the resources that have been identified as highly sensitive, and a user having access to these resources requires a greater degree of scrutiny by a reviewer. Yet another section may identify access assignments that the intelligence engine, based upon configurable policies that reflect a corporation’s business policies, has flagged as being questionable.

One such example is outlier access, which may be defined as an access assignment that differs by some degree from access that is held by a user’s cohort group, such as others with the same job title or others in the same department. Such an intelligence-driven certification would focus a reviewer’s attention on those items that matter most, perhaps even requiring multi-level certification based upon the sensitivity of the resource or the degree to which the access is an outlier.
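
To make the outlier idea concrete, here is a minimal sketch (in Java, with made-up names and thresholds; it is not Courion’s engine) of how an intelligence layer might flag entitlements that few of a user’s peers hold:

// Minimal sketch of outlier-access detection (illustrative only).
// An entitlement is flagged as an outlier when fewer than a configurable fraction
// of the user's cohort (same job title or department) also holds it.
import java.util.*;

public class OutlierAccessCheck {

    /** Returns the entitlements of 'user' that fall below the cohort-prevalence threshold. */
    public static Set<String> findOutliers(String user,
                                           Map<String, Set<String>> entitlementsByUser,
                                           Set<String> cohort,
                                           double prevalenceThreshold) {
        Set<String> outliers = new HashSet<>();
        for (String entitlement : entitlementsByUser.getOrDefault(user, Set.of())) {
            long holders = cohort.stream()
                    .filter(peer -> entitlementsByUser.getOrDefault(peer, Set.of()).contains(entitlement))
                    .count();
            double prevalence = cohort.isEmpty() ? 0.0 : (double) holders / cohort.size();
            if (prevalence < prevalenceThreshold) {
                outliers.add(entitlement);   // candidate for closer review or multi-level certification
            }
        }
        return outliers;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> access = Map.of(
                "bob",  Set.of("payroll-db", "file-share"),
                "sue",  Set.of("file-share"),
                "amir", Set.of("file-share"));
        Set<String> cohort = Set.of("bob", "sue", "amir");             // e.g. everyone with the same job title
        System.out.println(findOutliers("bob", access, cohort, 0.5));  // [payroll-db]
    }
}

In practice the cohort definition, the prevalence threshold and the follow-up action (cursory review, multi-level certification, removal) would all be configurable policy rather than hard-coded values.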

Perhaps the most attractive aspect of intelligence-driven certifications is the potential to eliminate the need for an all-encompassing review altogether. Since the use of intelligence can segment access assignments into different groups based upon configurable criteria, why not use that intelligence as the basis for determining which access should be reviewed on an as needed basis? Sensitive resources can be reviewed on a monthly basis. Outlier access can be reviewed as soon as it is detected and the access can be removed immediately or approved for a given amount of time based upon configurable boundaries.

Intelligence-driven governance is a game-changer; identifying and organizing access assignments into questions that focus reviewers’ attention on those things that matter most to the business. The use of intelligence changes the question from “Are all of these access assignments appropriate?” to questions like, “Should Bob have access to this server when he is the only one in the department with such access?”, “Sue has access to this file share just like all of her colleagues, but she’s the only one accessing it on the weekends, is that appropriate?” or “This resource has been identified as a highly-sensitive resource and average utilization of this resource has increased over the past week; in particular, Joe & Fred have shown a 200% increase for this resource, is that appropriate?”

In addition to the fact that the governance process can evolve from a high-level check to very specific queries, the addition of intelligence ensures that these specific questions are asked at the time the events are happening, such that anomalies can be addressed immediately before they become a catastrophe.

In my final installment of this 3-part series, we’ll focus on the use of intelligence as a means to reduce risk.

blog.courion.com

Mark Dixon - Oracle - Honoring Jackie Robinson in Space [Technorati links]

April 16, 2015 03:53 AM

NASA astronaut Terry Virts, wearing a replica Jackie Robinson jersey in the cupola of the orbiting International Space Station, is celebrating Jackie Robinson Day, April 15, with a weightless baseball.


April 15th (Baseball’s opening day in 1947) has now come to commemorate Jackie Robinson’s memorable career and his place in history as the first black major league baseball player in the modern era. He made history with the Brooklyn Dodgers (now the Los Angeles Dodgers) and was inducted into the Baseball Hall of Fame in 1962.

Congratulations, Jackie, for your courage!  Thank you, Terry, for a memorable celebration!

Mark Dixon - Oracle - Verizon 2015 Data Breach Investigations Report [Technorati links]

April 16, 2015 03:25 AM


The new Verizon 2015 Data Breach Investigations Report has been published.

It is interesting to note … 

The year 2014 saw the term “data breach” become part of the broader public vernacular, with The New York Times devoting more than 700 articles related to data breaches, versus fewer than 125 the previous year.

And there are undoubtedly more to come. Consider one of the scariest charts in the report:

[The chart] contrasts how often attackers are able to compromise a victim in days or less (orange line) with how often defenders detect compromises within that same time frame (teal line). Unfortunately, the proportion of breaches discovered within days still falls well below that of time to compromise. Even worse, the two lines are diverging over the last decade, indicating a growing “detection deficit” between attackers and defenders.


Enjoy the read! We in the information security industry have a lot of work to do.

April 15, 2015

Katasoft - New Node.js Release: User Management & Authentication for Loopback [Technorati links]

April 15, 2015 03:00 PM


If you’ve been building Node.js applications for a while, you’ve likely heard of Loopback — it’s a very popular Node.js framework for building API services.

I’m a huge fan of Loopback, as I’ve found it a really quick and convenient way to build REST APIs in the past.

With our brand new loopback-stormpath library, you can now use all of Stormpath’s amazing user management tools to secure your user data, easily manage your users via our clean web interface, and scale your Loopback APIs to infinityyyyy and beyond!

Loopback has tons of really nice integrations:

The goal for loopback-stormpath is to extend Loopback with Stormpath’s user and API authentication and other features, so all Loopback developers can have robust authentication and user functionality in only a few minutes. This release is the first step.

How The Loopback-Stormpath User Model Works

While Loopback provides a built-in user model that you can use to create / store / manage users in your own user database, our new loopback-stormpath library offloads your user store to Stormpath, which gives you a number of additional user management features and benefits:

Getting Started with Loopback-Stormpath

Using the new loopback-stormpath library is quite simple. Here’s what you need to do after creating a new Loopback project.

Firstly, install the library via npm:

$ npm install loopback-stormpath --save

Secondly, modify your server/server.js file, and add the following import at the top:

var stormpath = require('loopback-stormpath');

Next, in your server/server.js file, directly before the boot(app, __dirname); line, add the following:

// Initialize Stormpath.
stormpath.init(app);

This will initialize Stormpath’s middleware properly.

After that, you need to modify your server/model-config.json file and do the following:

Lastly, open up server/datasources.json and add the following:

"stormpath": {
  "name: "stormpath",
  "connector": "stormpath",
  "apiKeyId": "xxx",
  "apiKeySecret": "xxx",
  "applicationHref": "xxx"
}

You’ll need to fill in the three bottom values with your Stormpath credentials. If you don’t already have a Stormpath account, go make one! https://api.stormpath.com/register

That’s it! You’ve now got Stormpath installed and configured properly, so you can manage your users as you would with any other Stormpath-backed application.

Upcoming Features for loopback-stormpath

Right now, this is an early alpha release. This means that it’s brand new, and might contain bugs!

If you find any bugs or issues, please file them on the Github repo! We love bug reports as well as feature requests.

Over the next few months, we will improve the integration so it supports all ORM features, as well as build out new library documentation, and include first-class front-end support.

There’s a LOT more to come in the future, so please go check it out now and let me know what you think so far!

Also! I just wanted to give a huge shout out to everyone at Strongloop who helped me with this along the way. Big props to the entire Strongloop team — everyone there was amazingly helpful =)

Happy Hacking,

-Randall

April 14, 2015

Rakesh Radhakrishnan - Dominant Defensive Design Principles 4 [Technorati links]

April 14, 2015 02:55 AM
Any IAAS stack in a private cloud or a public cloud is hosted in a network (the Data Center network). While a full-blown network security design and architecture will consist of network-to-network (NNI) interface designs like GRE/L2TP, remote LAN designs, remote network access and more, the focus of these defensive designs is the Data Center networking that acts as the host for IAAS. Fundamentally, the dominant design principles are "defense in depth" from a perimeter protection perspective and "Context driven Admission Control" from a Data Center (internal routing) perspective. The 9 layers or 9 steps to securing the perimeter that leverage the defense-in-depth principle include:

Step 1  Design Time – End to End Vulnerability Scans and Secure Configurations
Step 2  End to End Pen Tests and Tuning based on Test Results
Step 3  Stealth Mode – Sniffer and Scrubber for Network DDOS mitigation (can be a Sec AAS)
Step 4  External Firewall – port, packet and protocol level filters and DMZ rules
Step 5  Network IDS and IPS – between External and Internal Firewalls for Intrusions
Step 6  Internal Firewall setting up L3 (ASA/AnyConnect) and L5 (TLS) VPNs after device validation
Step 7  Network Services DDOS mitigation – NTP, DHCP, etc.
Step 8  Identity and Application aware (Trust TAGS) based Data Center routing (see risk based routing in next 3 slides)
Step 9  SIEM and continuous internal integrity checks and egress web proxy

Fundamentally, the integrity of the packets keeps increasing (ingress) as they pass one step after the other. Step 1, end-to-end vulnerability scanning, for example, might seem like a design-time function; however, with vendors offering on-premises and secure SAAS VM tools, it becomes a continuous process. So does network penetration testing. Network DDOS mitigation is typically a Security-as-a-Service offering today that the cloud DC operator has to offer at a minimum.

The 2nd dominant principle is Context Aware Next Generation Admission Control within the data center. Each subject's user context, device context and access network context (wifi, wired LAN, location, etc.) are all taken into account when admission into specific VMs in VLANs is managed - by an identity- and risk-aware engine. This today must be a standard offering of cloud IAAS vendors and their respective data centers.
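
As a rough illustration of what such an engine does, the sketch below (made-up weights and thresholds, not any vendor's API) reduces user, device and access network context to a risk score and maps that score to an admit/quarantine/deny decision:

// Illustrative sketch of context-driven admission control (hypothetical weights).
public class AdmissionControl {

    enum Decision { ADMIT, QUARANTINE, DENY }

    // Reduce the subject's context to a simple additive risk score.
    static int riskScore(boolean mfaPassed, boolean deviceHealthy,
                         boolean corporateNetwork, boolean knownLocation) {
        int score = 0;
        if (!mfaPassed)        score += 40;   // weak authentication weighs heaviest
        if (!deviceHealthy)    score += 30;   // failed device posture check
        if (!corporateNetwork) score += 20;   // e.g. guest wifi or a remote access network
        if (!knownLocation)    score += 10;
        return score;
    }

    // Map the score to an admission decision for a sensitive VLAN or VM segment.
    static Decision admit(boolean mfaPassed, boolean deviceHealthy,
                          boolean corporateNetwork, boolean knownLocation) {
        int risk = riskScore(mfaPassed, deviceHealthy, corporateNetwork, knownLocation);
        if (risk >= 50) return Decision.DENY;
        if (risk >= 20) return Decision.QUARANTINE;   // e.g. restricted VLAN or step-up authentication
        return Decision.ADMIT;
    }

    public static void main(String[] args) {
        System.out.println(admit(true, true, true, true));     // ADMIT
        System.out.println(admit(true, true, false, true));    // QUARANTINE (off the corporate network)
        System.out.println(admit(false, false, false, false)); // DENY
    }
}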

This type of approach is also discussed in a Cisco Arbor BYOD security paper. What's also critical to understand is the extent to which these network-facing security systems integrate with one another. This includes MFA (multi-factor authentication) platforms like SecureAuth with Cisco ASA and AnyConnect, RSA 2FA with SecureAuth (as one 2FA mechanism), Cisco ASA/AnyConnect with Citrix NetScaler (for VDI and TLS-level VPN into VDI), all of these contexts carried over to Cisco ISE, and more.
April 12, 2015

Rakesh Radhakrishnan - Dominant Defensive Design Principles 3 [Technorati links]

April 12, 2015 11:52 PM
Once you have apps developed with secure-by-design principles and data objects with privacy-baked-in principles, you are looking at hosting them in a private or a public IAAS or a hybrid (for example, private cloud for production and public cloud for DR). The two primary principles here are "Identity in the STACK" and leveraging the abstraction between the hypervisor and the virtual machines.

A sample end-to-end flow of security processes in an IAAS model - such as OpenStack and Cloud Foundry - includes:

Step 1  The hypervisor goes through IDS/IPS (intrusion tolerance) at boot time (the hypervisor is accessible only via a segregated interface to the control networks)
Step 2  When a Linux or Windows OS boots up as a VM on the hypervisor, appropriate malware/virus checks are completed
Step 3  Virtual Machines (on ESX) are controlled with NSX-like firewalls for protocol, communication, processes and more (secure software defined networking)
Step 4  All privileged access management to the OS is handled via a Command Control firewall that handles RBAC and XACML (like BeyondTrust)
Step 5  All 4 layers report log data for FIPS forensics (traceability) and are identity and policy aware
Step 6  All privileged access management to the DBMS is handled via a DB firewall that handles RBAC and XACML
Step 7  All privileged access is routed to a specific network segment (control plane) via Cisco ISE-like solutions (for end to end id/policy in the stack)
 

The 5 levels of maturity one can attain in the IAAS Security space are:

Level 1:  Silos of Layered Malware Firewalls – Hypervisor centric, OS/VM centric, JVM centric, Network centric, DB FW, etc.
Level 2:  Integrated Firewalls – for example, SQL injection or command injection based WL and BL across firewalls – cross coordination
Level 3:  Fine Grained command level AC at OS and DBMS (privileged administrators)
Level 4:  Identity integrated into the stack for end to end forensics
Level 5:  Comprehensive and Consistent automated policies – end to end auditable – globally and continually optimized at the infrastructure layers


The virtual machines that run the apps and the data processing for the Secure SAAS app have their addressable interfaces segregated to their own NIC and path, typically NAT'ed to a publicly resolvable IP address. There typically will be no path for advanced threats to permeate into the hypervisor layer via these applications. Everything from the VM and above as a STACK can potentially be self-cleansing - when designed as short-lived and stateless services.
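
A minimal sketch of the self-cleansing idea follows (the provisioning hook is hypothetical); because the instances are stateless, each one can simply be replaced from a known-good golden image on a fixed schedule, limiting how long an undetected compromise can persist:

// Sketch of short-lived, stateless, self-cleansing instances (illustrative only).
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SelfCleansingPool {

    interface Provisioner {                        // assumed infrastructure hook, not a real API
        String launchFromGoldenImage();            // returns the new instance id
        void drainAndTerminate(String instanceId); // stop routing traffic, then destroy
    }

    private final Provisioner provisioner;
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private volatile String activeInstance;

    public SelfCleansingPool(Provisioner provisioner) {
        this.provisioner = provisioner;
        this.activeInstance = provisioner.launchFromGoldenImage();
    }

    /** Rotate the active instance every 'lifetime', e.g. Duration.ofMinutes(30). */
    public void startRotation(Duration lifetime) {
        scheduler.scheduleAtFixedRate(() -> {
            String replacement = provisioner.launchFromGoldenImage();  // bring up a clean instance first
            String old = activeInstance;
            activeInstance = replacement;                              // switch traffic to it
            provisioner.drainAndTerminate(old);                        // then retire the old one
        }, lifetime.toMillis(), lifetime.toMillis(), TimeUnit.MILLISECONDS);
    }
}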
 
All privileged administrative tasks can take a different path and will comply with ISO 27002, security system for Data Center operations and administration.  They need not take a Cisco ISE like NG NAC path and can involve lower level networking (such as L2TP).

 

Rakesh Radhakrishnan - Dominant Defensive Design Principles 5 [Technorati links]

April 12, 2015 11:51 PM
End points - the entry points - today have robust technologies to embrace these ideas around Zero Footprint and Stateless devices and password/cookie-free designs.

One can lose a device - yet no data is lost, no security sensitive apps reside on it (zero footprint and stateless) and there are no end point session cookies with which to hijack a session (dumb display devices or smart display devices).

A sample end to end security process flow includes;


Step 1  Device Provision time (SIM, IMEI, SW, etc.)
Step 2  Authentication (FIDO based & SAML profiles)
Step 3  Isolation (end point shim) + VPN layer, Location, Secure Browser, RDP, IPsec, etc.
Step 4  MGW/STS OAUTH Tokens for Native App SSO
Step 5  Mobile DLP (mobile content protection) – via ICAP
Step 6  Inbound and Outbound URI validation against malware (ICAP)
Step 7  Mobile APT (client side and server side)
Step 8  Run Time Mobile Apps in Data Center
Step 9  ISE generates posture (suspect, quarantine and good)

and the respective maturity levels are measured by:

Level 1:  Basic MDM and Mobile Malware protection – access ONLY to trivial services
Level 2:  Mobile MFA (FIDO) with Mobile SSO (for native Mobile Apps using the OAUTH API)
Level 3:  Mobile end point posture based Network Admission Controls (Mobile and VPN layer and network context: within Enterprise Ethernet LAN, International Locations, Wifi Guest LAN, etc.)
Level 4:  Mobile Isolation (SHIM) driven VPN – that leverages 3 and 4 + RDP (secure remote desktop)
Level 5:  Comprehensive and Consistent integrated control – end to end auditable – globally and continually optimized




75% of advanced malware is caused by leveraging a password (a credential) as an attack vector. With the FIDO Alliance's work and IAPP's efforts, in a cloud model and a BYOD world we have no choice but to move towards recognition and strong multi-factor recognition as the way forward.

Rakesh Radhakrishnan - Dominant Defensive Design Principles [Technorati links]

April 12, 2015 11:03 PM
For 10 years (2001-2010) I worked on integrating IAM (Identity & Access Management) into a broad spectrum of security tools (at Sun/Oracle), both internal and partner solutions. Now, for 5 years, I've had the privilege of seeing enterprise-wide patterns in security designs (in Banking, Healthcare, etc.). What I see as the dominant defensive design principles are described below and followed up with one blog in each area.

Core Data Layer: The notion of Privacy Baked In and Intrusion Tolerant Data should be the norm today. It's the Achilles' heel. Mature organizations know what their security sensitive data is and where it resides. Data need not be decrypted at any point in time (collection, transmission, storage, etc.), and even when it is decrypted in memory at process time, it is handled only under Trusted Execution (by processes that are trusted by an execution engine). Security sensitive data collection is minimized and de-identification is the norm. Policies are embedded within the data objects, and data purging and data movement rules must allow for Intrusion Tolerance (storage is network-disconnected from a Data Center where intrusions are detected). DDS (data defined storage) plus Big Data technologies can be used for such movements.
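
As an illustration of policies travelling with the data, the sketch below (hypothetical types, not any specific product) shows a record that carries its own sensitivity tags and masks a field unless the caller's clearances cover every tag on that field:

// Illustrative tag-based read check for a data object that carries its own policy tags.
import java.util.Map;
import java.util.Set;

public class TaggedRecord {

    private final Map<String, String> fields;         // field name -> value
    private final Map<String, Set<String>> tags;       // field name -> sensitivity tags (PII, PCI, PHI, ...)

    public TaggedRecord(Map<String, String> fields, Map<String, Set<String>> tags) {
        this.fields = fields;
        this.tags = tags;
    }

    /** Returns the field value only if every tag on the field is within the caller's clearances. */
    public String read(String field, Set<String> callerClearances) {
        Set<String> fieldTags = tags.getOrDefault(field, Set.of());
        if (callerClearances.containsAll(fieldTags)) {
            return fields.get(field);
        }
        return "***";   // masked rather than returned in the clear
    }

    public static void main(String[] args) {
        TaggedRecord r = new TaggedRecord(
                Map.of("name", "Alice", "ssn", "123-45-6789"),
                Map.of("ssn", Set.of("PII")));
        System.out.println(r.read("ssn", Set.of()));        // ***
        System.out.println(r.read("ssn", Set.of("PII")));   // 123-45-6789
    }
}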

Typical processes involve:

Step 1  End point UI Data Collection Integrity checks
Step 2  End point Device DLP
Step 3  Data integrity during data transmission (VPN and Message/XML sec)
Step 4  Data validation by an XML Firewall (parameter validations)
Step 5  Cloud Data Tokenization (for inbound and outbound, with FPE and FHE)
Step 6  IRM for Data in Documents
Step 7  DRM for Data in Media files
Step 8  DB Firewall for Data in RDBMS and non-structured DBMS
Step 9  XACML policies for Tagged data (PII, PCI and PHI)

The 5 levels of Maturity typically in this space are:

Level 1:  Non-externalized – built-in legacy entitlement code within applications, some basic RBAC
Level 2:  Externalized ABAC policies (in respective application -roles- and DB entitlement -tags- FW)
Level 3:  Externalized ABAC (context) policies augmented with Risk IN
Level 4:  All risk sensitive applications with security or compliance sensitive data (PII, PCI, PHI, etc.) use Externalized Risk based Entitlement augmented with Password Free, Cookie Free, Stateless, Zero Footprint client code
Level 5:  Comprehensive and Consistent automated policies – end to end auditable – globally and continually optimized (Continuous Loop of Entitlement with SIEM – risk intelligence, threat intelligence based policies – STIX and XACML)

Rakesh Radhakrishnan - Dominant Defensive Design Principles 2 [Technorati links]

April 12, 2015 06:35 PM
The second area is Applications (web applications, web services, REST services, etc.). The majority of the defensive design mechanisms are built in at design and development time and validated with vulnerability testing and penetration testing tools (such as HP WebInspect). For example, if as part of the application code there is NO Data Abstraction Layer and frequent insecure direct object references are made, there is no point in inserting a run-time application firewall. The design has to accommodate a DAL, and as part of the penetration testing any IDOR in code has to be removed. These types of applications are also STATELESS (REST APIs) and hence can accommodate self-cleansing code (using SCIT-like technologies).
At run time, typical mitigation controls include:

Step 1  An F5 BIG-IP-like IP LB terminates TLS/SSL and decrypts the payload
Step 2  An F5 ASM-like OWASP FW inspects the payload (HTML, scripts, XML, JSON, SQL, SOAP, etc.)
Step 3  A Layer 7-like XML/API FW inspects the XML payload for conformance, schema validation and API
Step 4  Layer 7 interacts with SiteMinder to establish an SM session based on a SAML assertion
Step 5  Layer 7 interacts with an Axiomatics-like FGES to make Authorization calls for RBAC
Step 6  The validated XML payload is sent to the Apache Web server
Step 7  The Apache Web server processes the request and sends it to the REST API in WebLogic
Step 8  WebLogic Web plus REST application executions
Step 9  WebLogic makes Authorization calls to Axiomatics (ABAC and method level)
Step 10  Processed XML is sent to JMS
Step 11  The back end Business Process retrieves the XML message
Step 12  The back end Business Process logic executes
Step 13  The back end Business Process publishes the XML message
Step 14  JMS dequeues the XML message
Step 15  Processed XML is re-used by REST calls
Step 16  The Apache Web server sends the message to F5 ASM via Layer 7 for validation
Step 17  F5 BIG-IP encrypts the outbound message
Step 18  The DAL converts XML to SQL to store in the DB for audit
Step 19  Consolidated and stored in the DB repository, including logs
Step 20  The DB firewall is leveraged for masking of PII
Step 21  The TLS payload is also shipped to the Sec SAAS vendor for APT/ATP analysis (from a sensor)
Step 22  The Sec SAAS vendor inspects the code for advanced malware (Bots, Botnets, C&C, APT, etc.)
Step 23  If an IOC is detected, the respective OWASP/WAF FW, XML/API FW and DB FW are notified
Step 24  Short Lived, Stateless, Self Cleansing Web Containers (code integrity)
Step 25  Short Lived, Stateless, Self Cleansing App Containers (code integrity)


 
This type of SAAS Security Design is well aligned to the idea of a "Password FREE, Cookie FREE, Zero Footprint, Stateless End point". Everything from the client UI code to the API code resides in the SAAS space and is delivered post security checks (role based UI rendering, for example). Steps 21 to 25 are critical for security sensitive apps that are also targets of advanced threats and can leverage a Virtual Execution environment and self-cleansing application containers.

Katasoft - How to Create and Verify JWTs in Java [Technorati links]

April 12, 2015 03:00 PM

Java support for JWT (JSON Web Tokens) is in its infancy – the prevalent libraries can require customization around unresolved dependencies and pages of code to assemble a simple JWT.

We recently released an open-source library for JWTs in Java. JJWT aims to be the easiest to use and understand library for creating and verifying JSON Web Tokens (JWTs) on the JVM.

JJWT is a ‘clean room’ implementation based solely on the JWT, JWS, JWE and JWA RFC draft specifications. According to one user on Stack Overflow, it’s “Simple, easy and clean, and worked immediately.” This post will show you how to use it, so any Java app can generate and verify JWTs without much hassle.

What Are JWTs?

JWTs are an encoded representation of a JSON object. The JSON object consists of zero or more name/value pairs, where the names are strings and the values are arbitrary JSON values. A JWT is useful for sending such information in the clear (for example in a URL) while it can still be trusted to be tamper-evident (i.e. signed), optionally confidential (i.e. encrypted as a JWE) and URL-safe (i.e. Base64url encoded).
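
For example, a signed JWT (a JWS) is just three Base64url-encoded segments separated by dots: header.payload.signature. The snippet below (the token is a contrived placeholder, not a real credential) shows what sits inside the first two segments:

// Peek inside a compact JWS; the third segment is the signature and should always be
// verified with a library such as JJWT, never inspected by hand.
import java.util.Base64;

public class JwtPeek {
    public static void main(String[] args) {
        String jwt = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJqb2UifQ.c2lnbmF0dXJl";
        String[] parts = jwt.split("\\.");
        Base64.Decoder dec = Base64.getUrlDecoder();
        System.out.println(new String(dec.decode(parts[0])));  // {"alg":"HS256"}
        System.out.println(new String(dec.decode(parts[1])));  // {"sub":"joe"}
    }
}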

Want to learn more? You can check one of our previous posts and the JWT spec.

JWTs can have different usages: authentication mechanism, url-safe encoding, securely sharing private data, interoperability, data expiration, etc. Regardless of how you will use your JWT, the mechanisms to construct and verify it are the same. So, let’s see how we can very easily achieve that with the JSON Web Token for Java project.

Generate Tokens

import javax.crypto.spec.SecretKeySpec;
import javax.xml.bind.DatatypeConverter;
import java.security.Key;
import io.jsonwebtoken.*;
import java.util.Date;    

//Sample method to construct a JWT

private String createJWT(String id, String issuer, String subject, long ttlMillis) {

    //The JWT signature algorithm we will be using to sign the token
    SignatureAlgorithm signatureAlgorithm = SignatureAlgorithm.HS256;

    long nowMillis = System.currentTimeMillis();
    Date now = new Date(nowMillis);

    //We will sign our JWT with our ApiKey secret
    byte[] apiKeySecretBytes = DatatypeConverter.parseBase64Binary(apiKey.getSecret());
    Key signingKey = new SecretKeySpec(apiKeySecretBytes, signatureAlgorithm.getJcaName());

    //Let's set the JWT Claims
    JwtBuilder builder = Jwts.builder().setId(id)
                             .setIssuedAt(now)
                             .setSubject(subject)
                             .setIssuer(issuer)
                             .signWith(signatureAlgorithm, signingKey);

    //if it has been specified, let's add the expiration
    if (ttlMillis >= 0) {
        long expMillis = nowMillis + ttlMillis;
        Date exp = new Date(expMillis);
        builder.setExpiration(exp);
    }

    //Builds the JWT and serializes it to a compact, URL-safe string
    return builder.compact();
}

Decode and Verify Tokens

import javax.xml.bind.DatatypeConverter;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.Claims;

//Sample method to validate and read the JWT
private void parseJWT(String jwt) {
    //This line will throw an exception if it is not a signed JWS (as expected)
    Claims claims = Jwts.parser()
        .setSigningKey(DatatypeConverter.parseBase64Binary(apiKey.getSecret()))
        .parseClaimsJws(jwt).getBody();
    System.out.println("ID: " + claims.getId());
    System.out.println("Subject: " + claims.getSubject());
    System.out.println("Issuer: " + claims.getIssuer());
    System.out.println("Expiration: " + claims.getExpiration());
}
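
For completeness, here is one way the two methods above might be exercised together. This is just a sketch: it assumes it lives in the same class as createJWT and parseJWT, with an apiKey field whose getSecret() returns a Base64-encoded secret, as in the snippets above.

//Sample round trip: build a token, verify it, and confirm a tampered token is rejected
private void roundTripExample() {
    //Token valid for one hour
    String jwt = createJWT("token-42", "my-issuer", "alice@example.com", 3600000L);
    System.out.println("Compact JWS: " + jwt);

    //Verifies the signature and prints the claims; throws if the token was altered
    parseJWT(jwt);

    //Chopping characters off the signature must cause verification to fail
    try {
        parseJWT(jwt.substring(0, jwt.length() - 2));
    } catch (io.jsonwebtoken.JwtException e) {
        System.out.println("Tampered token rejected: " + e.getMessage());
    }
}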

To sum it up…

We tried to make it very easy to both construct and verify JWTs using JSON Web Token for Java. You only need to specify the data you want to encode and sign it with a key. Later, with that same key you can verify the authenticity of the token and decode it. The benefits of using JWT greatly exceed the time and effort of implementing them. Give it a try and you will have a hassle-free and more secure application.

Last but not least, do not forget to use SSL when communicating with remote peers since the token will be travelling over the wire on every request.

Please leave your comments below and check out our new Spring Boot Support and Java Servlet Support. We’re investing heavily in making authentication, user management, and single sign-on across Java applications easy, free and secure. You can read more about Stormpath user management for the JVM.

Mike Jones - Microsoft - 10 Years of Digital Identity! [Technorati links]

April 12, 2015 02:19 AM

How time flies! In March 2005 I began working on digital identity. This has by far been the most satisfying phase of my career, both because of the great people I’m working with, and because we’re solving real problems together.

An interesting thing about digital identity is that, by definition, it’s not a problem that any one company can solve, no matter how great their technology is. For digital identity to be “solved”, the solution has to be broadly adopted, or else people will continue having different experiences at different sites and applications. Solving digital identity requires ubiquitously adopted identity standards. Part of the fun and the challenge is making that happen.

Microsoft gets this, backs our work together, and understands that when its identity products work well with others that our customers and partners choose to use, we all win. Very cool.

Those of you who’ve shared the journey with me have experienced lots of highs and lows. Technologies that have been part of the journey have included Information Cards, SAML, OpenID 2.0, OAuth 2.0, JSON Web Tokens (JWTs), JSON Web Signing and Encryption (JOSE), and OpenID Connect. Work has been done in OASIS, the Information Card Foundation, the OpenID Foundation, the Open Identity Exchange (OIX), the Liberty Alliance, the IETF, the W3C, the FIDO Alliance, and especially lots of places where the right people chose to get together, collaborate, and make good things happen – particularly the Internet Identity Workshop.

It’s worth noting that this past week the Internet Identity Workshop held its 20th meeting. They’ve been held like clockwork every spring and fall for the past 10 years, providing an indispensable, irreplaceable venue for identity practitioners to come together and get things done. My past 10 years wouldn’t have been remotely the same without the past 10 years of IIW. My sincerest thanks to Phil, Doc, and Kaliya for making it happen!

I won’t try to name all the great people I’ve worked with and am working with because no matter how many I list, I’d be leaving more out. You know who you are!

While we’re all busy solving problems together and we know there’s so much more to do, it’s occasionally good to step back and reflect upon the value of the journey. As Don Thibeau recently observed when thanking Phil Windley for 10 years of IIW, “these are the good old days”.

April 09, 2015

Nishant Kaushik - Oracle - Building the Self Defending Enterprise [Technorati links]

April 09, 2015 01:24 PM

Algorithms. Algorithms. Algorithms.

If Steve Ballmer were still running the show at Microsoft, I’m pretty sure that would have been his chant at the next conference. The abundance of data being generated, collected and analyzed now is so vast that it has been a completely logical progression to move away from human analysis to algorithmic analysis in this “big data era”. Data science is hot, and its methods and mind set have already transformed the advertising, retail and media industries – all in pursuit of the noble goal of improving the odds of making a sale through targeted marketing and recommendations. However (queasy) you may feel about that, it is an undeniable fact that many industries are moving towards automated decision-making which takes humans out of the equation and promises better outcomes based on data and analysis.


So what does this mean for identity management and security? I’ve been exploring this ever since I gave my talk at the 2014 Cloud Identity Summit. The history of the security industry is littered with failed products built on the promise of expert systems. But I believe we are at a convergence point; we now have an ever growing mountain of data available for analysis, while machine learning and other data science methodologies have improved significantly in both capability and performance. The result: security solutions that have the ability to dynamically identify, report and even remediate issues which the vendor and operator didn’t need to foresee and create predefined policies or conditions for. And while the military may be leveraging this to build what amounts to a cyber-Skynet, it is also driving real innovation in the areas of enterprise and online security. Security automation is creating solutions that go beyond simply enforcing your defenses, and actually dynamically define them. 

Security automation is just one of a few factors that are helping create a security blueprint for what I have coined ‘The Self-Defending Enterprise’. Not a terribly original moniker, I know, but one that has a nice ring to it as it speaks to both a pressing need and an emerging capability. In a borderless IT environment where threat vectors continuously shift, evolve and multiply, we cannot rely on security models that are network-based, prescriptive and hardened. This brave new world needs bold new solutions.

I’ll be expanding on the model and these other factors in the coming months. Some of this has been driving the work I’ve been doing in my day job (which has kept me away from my real day job of engaging in twitter banter with Paul and Brian). And with RSA Conference happening in San Francisco in a little over a week (I’ll be there along with other folks from CA – check out details of our presence there), there should be ample opportunity to discuss this and see different vendors whose solutions are changing the landscape. So stay tuned for my twitter commentary on location; and as usual, ping me if you’d like to meet up.


The post Building the Self Defending Enterprise appeared first on Talking Identity | Nishant Kaushik's Look at the World of Identity Management.

April 07, 2015

Kantara Initiative - Spotlight on Kantara’s Member “MedAllies” [Technorati links]

April 07, 2015 06:27 PM

In this edition of Spotlight, we are pleased to tell readers more about MedAllies, their unique role in IdM, and why they became Members of Kantara Initiative.

1) Why was your service/product created, and how is it providing real-world value today?

MedAllies, founded in 2001, has extensive experience with clinician usage of health IT and interoperability. MedAllies was formed as a health information technology adoption company. Over fifteen years ago, our earliest MedAllies work was with a third party vendor to build a basic referral/consult module as part of an initial Health Information Exchange (HIE) initiative which evolved to a robust regional exchange offering basic interoperability.

As part of the growth of the initiative, MedAllies implemented certified 3rd party Electronic Health Records (EHRs) to digitize ambulatory practices optimizing not only the workflow, but also the technology supporting the workflow. MedAllies realized that incorporating interoperability into the clinician’s workflow was the key to adoption and usage. Many of the previous efforts relied on clinicians working outside of their EHR application.

MedAllies started offering Direct Solutions™ which builds on existing technology to achieve interoperability. As one of the ONC Direct Reference Implementation vendors, MedAllies focused on EHR interoperability for Transitions Of Care (TOC). As an experienced EHR implementation company, MedAllies knew that as patients transitioned across care settings, poor communication hand-offs resulted in patient adverse events, particularly from medication errors.

In addition to MedAllies’ expertise in Direct, MedAllies has experience in practice/community transformation and Patient Centered Medical Home (PCMH). We know from initial research results that there is a very significant positive impact on patient care, which has led to continued efforts, including advanced payment models, to support and continue these initiatives and properly align incentives.
2) Where is your organization envisioned to be strategically in the next 5-10 years?

MedAllies is able to understand the needs of clinicians and bring interoperability tools to the point of care that assist clinicians in their efforts to improve the care of patients. Our efforts that initially grew out of the Hudson Valley of NY have allowed us to bring our experience to other communities nationwide. MedAllies will continue to utilize our decades of technology and clinical experience to ensure that transitions of care and communication among healthcare organizations are smooth and error free. Our vision is that as patients transition among organizations, there will be seamless movement of pertinent healthcare data that travels on Accredited networks, so that when patients arrive for care, the provider has their critical health information.
3) Why did you join Kantara Initiative?

MedAllies joined the Kantara Initiative to support identity interoperability, specifically in the healthcare space. As an operator of a national healthcare information network, MedAllies understands the importance of reliable and secure digital identities involved with the exchange of clinical information. The Kantara Initiative provides the programs and guidance necessary for the identity management solutions needed for the healthcare sector.
4) What else should we know about your organization, the service/product, or even your own experiences?
MedAllies has a full suite of services which include not only leveraging technology, but also an understanding of all aspects of a connected community including the importance of privacy and security. MedAllies employs a team of individuals with certified backgrounds in Health Information Management (HIM), IT, and Security.

MedAllies’ communities have been studied and peer-reviewed research has demonstrated the positive impact of full adoption and usage can have on patient care. Research about our region can be found at: http://www.taconicipa.com/health-it_1.html.

We want to achieve the goal of patients arriving at a hospital or provider’s organization and NOT being asked, “could you please fill out these forms and have a seat?”

Rakesh Radhakrishnan - Trending Towards Threat based Access Controls [Technorati links]

April 07, 2015 02:51 PM
I blogged a few years back about the opportunity large IT companies (IBM, Cisco, Oracle, Microsoft, etc.) had around supporting XACML natively, or XACML expressions of policies for import and export. I recently blogged about STIX IOC as an input for dynamically generated policies as well, across these tiers. Today the majority of SAAS apps are REST API based with a JSON or XML construct for data exchange. These types of application designs offer the opportunity to deliver device based and role based UI rendering, role based and attribute based access to application logic, TAG and metadata based access to data in databases, and more. ACL, CSV, RBAC, ABAC and TagBAC can all be expressed in XACML.

I am delighted to see IBM taking the lead in terms of supporting XACML in:

a) Secure Access Manager/Gateway for Mobile end points, that expresses these policies in XACML.
b) Embedded XACML PEP/PDP in DataPower XML/API Gateway (& Data Tokenization)
c) XACML based PDP for Application's fine grained Access Controls -Tivoli PM
d) XACML support in IBM Guardium DB firewall for extraction, access and exception policies
e) potential support for XACML in Secure Network Protection (NG context admission controls)

If a SIEM tool (like Splunk or QRadar) can pass STIX IOC to these XACML products, we get true dynamic threat intelligence based defense. This approach infuses NEW LIFE into XACML, which has the inherent capability to be a dynamic policy automation language, as the policies themselves are auto-generated (based on meta-data and relevant policy combinations). It is the only distributed policy model to support such dynamism.
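
A minimal sketch of that flow (illustrative only; no real STIX or XACML library is used) might look like the following, with the SIEM pushing indicators that a deny-overrides check consults before the normal entitlement evaluation:

// Sketch of a threat-intelligence-aware policy decision point (hypothetical, simplified).
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ThreatAwarePdp {

    // Indicators pushed by the SIEM (IP addresses, URLs, API names, ...)
    private final Set<String> compromisedIndicators = ConcurrentHashMap.newKeySet();

    /** Called whenever the SIEM emits an indicator of compromise, e.g. a suspect source IP. */
    public void onIndicator(String indicatorValue) {
        compromisedIndicators.add(indicatorValue);
    }

    /** The PEP asks the PDP: may 'sourceIp' call 'apiName'? */
    public boolean permit(String sourceIp, String apiName) {
        // Deny-overrides: any matching indicator beats the normal role/attribute policy
        if (compromisedIndicators.contains(sourceIp) || compromisedIndicators.contains(apiName)) {
            return false;
        }
        return evaluateNormalPolicy(sourceIp, apiName);
    }

    private boolean evaluateNormalPolicy(String sourceIp, String apiName) {
        return true;   // placeholder for the usual RBAC/ABAC evaluation
    }
}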

Yeah RIGHT - XACML is Dead... What a foolish controversy and debate!

I did a keynote last year at a CISO event calling on all CISOs to demand such standards from the vendors! Wake up, CISO world! Kudos to IBM for all the efforts in this direction!

The Threat Intelligence driven STIX IOC (Indicator of Compromise) can be around an end point IP address, a server/VM IP address, an end point device posture or a server/VM posture, an XML object or a SQL statement, or a URI/URL or a specific API, and that IOC can act as the intelligence data to respond accordingly with policy changes.

This is not rocket science - similar to the DLP-NAC XACML profiles, you can expect to see STIX IOC XACML profile specifications from OASIS in 2015!


Mike Jones - Microsoft - OpenID Connect working group presentation at April 6, 2015 OpenID workshop [Technorati links]

April 07, 2015 04:10 AM

I’ve posted the OpenID Connect working group presentation that I gave at the April 6, 2015 OpenID Workshop. It covers the current specification approval votes for the OpenID 2.0 to OpenID Connect Migration and OAuth 2.0 Form Post Response Mode specifications, the status of the session management/logout specifications, and OpenID Connect Certification. It’s available as PowerPoint and PDF.

April 06, 2015

OpenID.net - Vote to approve final OAuth 2.0 Form Post Response Mode specification [Technorati links]

April 06, 2015 09:38 PM

The OpenID Connect Working Group recommends approval of the following specification as an OpenID Final Specification:

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision.

The official voting period will be between Friday, April 17th and Friday, April 24th, 2015. For the convenience of members, voting actually opened on Monday, April 6th for members who have completed their reviews by then, with the voting period still ending on Friday, April 24th. Vote now at https://openid.net/foundation/members/polls/96.

Voting to approve the OpenID 2.0 to OpenID Connect Migration 1.0 specification is also open at https://openid.net/foundation/members/polls/91 through April 9th.

If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration.

A description of OpenID Connect can be found at http://openid.net/connect/. The working group page is http://openid.net/wg/connect/.

– Michael B. Jones, OpenID Foundation Secretary

Courion - Mike Rothman of Securosis to Keynote CONVERGE on May 20th [Technorati links]

April 06, 2015 06:06 PM

Access Risk Management Blog | Courion

There are so many reasons to join us at CONVERGE May 19 - 21.  And now we are happy to announce yet one more: Mike Rothman, President of the analyst firm Securosis, will join us on Thursday May 21st to discuss “The Future of Security”.

Mike specializes in what he irreverently describes as the “sexy” aspects of security, like protecting networks and endpoints, security management, and compliance. Mike will bring his “cynicism about the state of security and what it takes to survive as a security professional” to his session at CONVERGE 2015. Don’t miss it.

Sign-up before April 30 to save $100 on your registration fee.

Here are ten more reasons to join us:

1. Mix and mingle with fellow Courion customers

2. Learn what’s next in the Courion Access Assurance Suite

3. Take advantage of Tech Tuesday, a full day of deep dive technical workshops, and become an IGA ninja!

4. Network with industry IS peers in our popular ‘Birds of a Feather’ sessions on Wednesday May 20th

5. Laugh during a set of “application-specific comedy” with Don McMillan of Technically Funny on Wednesday May 20th at 4:00 p.m.

6. Meet new members of the Courion executive team

7. Earn 15 hours of continuing professional education (CPE) credits good towards maintaining professional certifications such as CISSP or CISM

8. Connect with solution partners such as IDMWorks, Ping Identity, Lieberman Software, Radiant Logic, SecZetta and Secure Reset.

9. Learn about Courion customer experiences firsthand in case studies and a special customer panel on intelligence

10. Enjoy fabulous food and exciting entertainment. It’s Vegas–need we say more?

    See you there!

    blog.courion.com

Vittorio Bertocci - Microsoft - ADAL Plugin for Apache Cordova: Deep Dive [Technorati links]

    April 06, 2015 02:00 PM

I am super happy to finally be able to talk about this! Today our friends in MS Open Tech are releasing the first developer preview of an Apache Cordova plugin for ADAL, the result of a few months of merry collaboration between our teams. This plugin will be yet another arrow in your developer quiver for adding the power of Azure AD to your multi-platform applications – namely, the ones targeting iOS, Android, Windows Store and Windows Phone. You can find the announcement posts here and here. And if you want to get started quickly, instead of sifting through all my word salad below, head straight to the sample repo and follow the detailed readme! In this post I am going to dig a bit deeper on the role that the plugin plays in the ADAL franchise, mention intended usage and take a peek under the hood of the plugin itself.

    No compromises: JavaScript apps with the power of native ADALs


    Since we announced ADAL JS, we had a constant stream of questions about using it in Cordova applications: how to do it, why it was not optimized for that use case, and so on. Technically it is possible to use ADAL JS in Cordova apps – I know of people who do it. However ADAL JS is designed to operate in a different environment, SPA apps coming from a  server, and assumes constraints that are simply not present in Cordova apps: browser sandboxing, absence of refresh token in the implicit flow, and so on. The Cordova plugin for ADAL does not have to cope with such limitations, and it grants you far more access to the advanced authentication capabilities of the devices themselves. How? Perhaps I should start from getting on the same page on what Cordova is. Quoting from its about page:

    Apache Cordova is a set of device APIs that allow a mobile app developer to access native device function such as the camera or accelerometer from JavaScript. Combined with a UI framework such as jQuery Mobile or Dojo Mobile or Sencha Touch, this allows a smartphone app to be developed with just HTML, CSS, and JavaScript. When using the Cordova APIs, an app can be built without any native code (Java, Objective-C, etc) from the app developer. Instead, web technologies are used, and they are hosted in the app itself locally (generally not on a remote http server). And because these JavaScript APIs are consistent across multiple device platforms and built on web standards, the app should be portable to other device platforms with minimal to no changes.

That is a very, very neat trick. As our first sample shows, it is amazingly simple to whip together one app – and run it on many different platforms without a single change. Of course our sample is a toy, as samples demonstrating API usage ought to be – IRL you’d likely at least add some CSS to comply with the look & feel of the targeted platform. But even taking that into account, I am amazed by how succinct the app code turns out to be. Cordova achieves its tricks by exposing native platform capabilities via plugins: JavaScript façades which route calls to fragments of native code – native code that the plugin must supply for each of the platforms it wants to support. That is also how the ADAL plugin came to be: we decided on a JavaScript API for exposing the most basic ADAL capabilities, then the valiant developers at MS Open Tech created a bridge between that and the native ADALs on iOS, Android and .NET (specifically, the two Windows Runtime Components in the ADAL .NET NuGet targeting Windows Store and Windows Phone 8.1 store apps).

Concretely: say that you write a Cordova app and you deploy it to an iOS device, real or emulated. When in your JavaScript you invoke one of the ADAL Cordova plugin methods, say the classic acquireTokenAsync, what actually happens is that the parameters will be dispatched down and the logic will be executed by the Objective-C flavor of ADAL: the cached tokens will be looked up from the Keychain, for example. Take the same application, and deploy it to a Windows device: the exact same JavaScript call will end up being executed by the corresponding .winmd component, and the tokens will be looked up from the Windows Store isolated storage. None of that would be possible with ADAL JS, of course: the storage on the actual device would be completely unreachable. The same holds for any other capability you get when you use the native ADALs.

    ADALs’ Rosetta Stone

    This isn’t the first time we work on an ADAL deliverable that can target multiple targets at once: ADAL .NET 3.0 preview uses PCLs and Xamarin technology to target the same platforms discussed here. Apart from the obvious audience difference between the two libraries (one is aimed at C# developers, the other at JavaScript ones) the main characteristic that sets those apart is how deep they need to drill in the platform layers to achieve their goals. For ADAL .NET, it’s the .NET Framework itself that is now available on every platform. The differences between platforms do exist, and we do need to take them into account in our programming model, but those all still live at the .NET level: we do need to change the component that shows the web authentication experience on every platform, but on every platform there’s a .NET API for it. Those differences surface all the way to you, the application developer: your Visual Studio solution typically has projects for each platform, where you write platform specific code (though that’s still .NET). That basically means that we are only limited by what makes sense for the target platform, but as a baseline we can expose whatever is in ADAL .NET. In ADAL for Cordova things are different. The JavaScript layer is just a façade and all the hard work is delegated to actual platform bits. We can only execute on platforms where we have an ADAL flavor available. For example: Cordova can run on Ubuntu, but we don’t have an ADAL that would run natively on it. That has two main consequences:

    1. The JavaScript façade we expose must utilize features that are available on ALL of the ADAL libraries used by the plugin.
      1. Corollary: if there are differences in the way in which ADALs on different platforms handle things, the plugin should try to normalize those as much as possible
    2. If there are platform specific features that MUST be surfaced to support mandatory functionalities, they have to be done in ways that won’t interfere with the platform-neutral programming model

    That’s quite a tall order! To avoid the analysis paralysis that was very likely to ensue, we deliberately kept things very simple:

    That approach makes it possible for you to write something like

authenticate: function (authCompletedCallback) {
    app.context = new Microsoft.ADAL.AuthenticationContext(authority);
    app.context.acquireTokenSilentAsync(resourceUri, clientId)
        .then(authCompletedCallback, function () {
            app.context.acquireTokenAsync(resourceUri, clientId, redirectUri)
                .then(authCompletedCallback, function (err) {
                    app.error("Failed to authenticate: " + err);
                });
        });
},

Which is pretty much the base of all native flows – try to get the token I need without showing any UX, and if it fails, prompt. That forced us to ensure that what we return from all libraries is consistent. That is mostly the case – the ADAL dev team makes semantic (if not always syntactic) consistency across platforms a priority, but there are a few things here and there that for one reason or another diverge. For example, not all ADALs agree about what should be used as the user ID in the AuthenticationResult: some use a human readable identifier, others do not. A more serious difference is in how platforms handle the common endpoint. The plugin normalized what was easy to normalize, but in general you can expect the consistency between native ADALs to increase with new releases. Anyhow: I personally really like the minimal interface this plugin offers. I am hoping that you guys will like it – I am all for lightweight, and if we see a few important apps built on top of this doing their auth stuff just fine, we might be able to spread the approach back to other ADALs.

    The Plugin

The plugin in itself has a pretty interesting architecture, dictated by how Cordova organizes things in a plugin. You don’t need to know any of the below in order to use the plugin in your app; I am reporting it just because it’s cool – and who knows, maybe I’ll entice you to contribute to it! Here’s a screenshot of the structure of the plugin repo:


    The JavaScript façade is in the www folder. That’s super convenient for figuring out the development surface offered by the plugin. All the files there are artifacts exposed by the OM, apart from CordovaBridge.js. That file contains the main dispatcher (executeNativeMethod) used to route JS calls to their native counterparts (see this for more details). For example, if you take a look at AuthenticationContext.js you’ll find that a call to acquireTokenAsync actually boils down to

    bridge.executeNativeMethod('acquireTokenAsync', [this.authority, resourceUrl, clientId, redirectUrl])

    The native action is all under /src. Here, every platform is represented by a subfolder (with the exception of Windows Store and Windows Phone, which are bundled). Every platform folder follows the same logical structure.

    The Scripts folder is also interesting, but before I get into the details of it I have to mention how one actually sets up the plugin in an application. Remember, we have detailed instructions in the readmes of both the library and the sample – the below is only for explaining the plugin’s architecture. Let’s say that you wrote your JS app and you are now ready to give it a spin. Here is the ceremony you follow if you use the Cordova command line tools:

       1: cordova create MySample --copy-from="sample"
       2: cd MySample
       3: cordova platform add https://github.com/apache/cordova-android.git
       4: cordova platform add ios
       5: cordova platform add windows
       6: cordova plugin add android@97718a0a25ec50fedf7b023ae63bfcffbcfafb4b
       7: cordova run

    (Note that if you are on Mac you can’t run a Windows emulator, and on Windows you can’t emulate iOS.) The first line takes your code and creates a new local repository based on it. It will be used to host both your code and whatever is necessary to support the platforms you’ll choose to support. Lines 3 to 5 tell Cordova to set up your sample app to include all the artifacts necessary for supporting the platforms specified. Finally, line 6 sets up the ADAL plugin in your project. That’s where the files in the /Scripts folder come in – they contain logic that needs to be executed as the plugin code is added to each platform, and in some cases at app build time. For example: if on Windows you want to be able to authenticate against an ADFS in your intranet, the Windows Runtime expects lots of settings to be set; iOS requires specific entitlements for code signing; and so on. All in all, I have to say that Cordova offers one of the cleanest and easiest to understand plugin structures I’ve seen. Navigating through the ADAL plugin repo is a joy, as everything is nicely readable and just makes sense. Again, you don’t need to see what’s inside the plugin to use it – but I find it interesting and instructive :)

    Feedback!

    This is a preview, and as usual the reason we put previews out is to give you the chance to give it a spin and let us know what you like, what you dislike, and what does not work for you. I am personally very excited about this plugin, I just love the simplicity and power it offers – and I know that lots of you were searching for a solution for using Azure AD and calling the APIs it protects (Office 365, Azure, Graph API, etc) from Cordova applications. Please do not hesitate to hit us with your feedback directly on GitHub. Happy coding!!

    April 04, 2015

    Julian BondOrphan Black Series 3 is coming up with the rest of the world premiere in April 18. In the UK it used... [Technorati links]

    April 04, 2015 07:34 AM
    Orphan Black Series 3 is coming up, with the rest-of-the-world premiere on April 18. In the UK it used to be shown on BBC3, but their website (http://www.bbc.co.uk/programmes/b04210v9) says "no upcoming broadcasts" and the various commentary sites have no UK dates.

    So how do I get to watch it in the UK? And preferably without the 3 day delay BBC3 used to have, so I don't get spoilers. Inevitably, the BBC America site is region locked, so online viewing is prevented in the UK. There are sometimes ways round that as long as you don't get too creative with things like a Chromecast.

    Why isn't BBC America available on things like Virgin Media? I guess it's the same reasons as why Sky Atlantic isn't available. ;)

    http://www.bbcamerica.com/orphan-black/
     Orphan Black »
    Official website for BBC America's series

    [from: Google+ Posts]
    April 02, 2015

    Ludovic Poitou - ForgeRockLinux AD Integration with OpenDJ – by Pieter Baele [Technorati links]

    April 02, 2015 10:26 PM

    This week I stumbled upon a presentation by Pieter Baele about the integration of Linux, Microsoft AD and OpenDJ to build a secure and efficient enterprise naming and security service.

    The presentation covers the different solutions for providing integrated authentication and naming services for Linux and Windows, and describes in more depth one built with OpenDJ. Overall, it has very good information for system administrators who need to address this kind of integration between the Linux and Windows worlds.



    Filed under: Directory Services Tagged: ActiveDirectory, directory, directory-server, floss, flossuk, integration, ldap, linux, opendj, opensource
    April 01, 2015

    KatasoftWhy HTTP is Sometimes Better than HTTPS [Technorati links]

    April 01, 2015 03:00 PM

    Gas Mask Sketch

    UPDATED April 2, 2015: This was an April Fools Joke. Read. Laugh. Learn. If you’re building web services, you should most definitely be using HTTPS.

    As a security company, we frequently get questions here at Stormpath from developers regarding security best practices. One of the most common questions we get is:

    Should I run my site over HTTPS?

    Unfortunately, regardless of where you go on the internet, you’ll mostly read the same advice: encrypt everything!, use SSL for all sites!, etc. The reality, of course, is that this is not usually good advice.

    There are many circumstances where HTTP is better than HTTPS. HTTP is, in fact, a much better and more useful protocol than HTTPS, which is why we often recommend it to our customers. Here’s why…

    The Problems with HTTPS

    HTTPS as a protocol is riddled with problems. Numerous, well-known issues with the protocol and popular implementations make it unsuitable for a wide variety of web services.

    HTTPS is Painfully Slow

    Sloth Sketch

    One of the primary blockers for HTTPS adoption is the fact that the HTTPS protocol is painfully slow.

    By its very nature, HTTPS is meant to securely encrypt communications between two parties. This requires that both parties continuously spend valuable CPU cycles encrypting and decrypting every message they exchange.

    While this doesn’t sound like much, crypto code is very CPU intensive. It makes heavy usage of the floating point CPU registers, which taxes your CPU and slows down request processing.

    Here’s a very informative ServerFault thread showing just how big of a slowdown you can expect using a simple Ubuntu server with Apache2: http://serverfault.com/questions/43692/how-much-of-a-performance-hit-for-https-vs-http-for-apache

    Here are the results:

    HTTPS is Slow

    Even in a very simple example like the one shown above, HTTPS can reduce the speed of your web server by more than 40 times! That’s a HUGE drag to web performance.

    In environments today, where it’s common to build your application as a composition of REST APIs — using HTTPS is a sure way to slow down your site, reduce your application’s performance, unnecessarily hammer your server CPUs, and generally annoy your users.

    For many speed sensitive applications it’s often much better to just use plain HTTP.

    HTTPS Isn’t a One-Size-Fits All Safeguard

    Darth Vader Sketch

    A lot of people are under the impression that HTTPS will make their site secure. This isn’t true.

    HTTPS only encrypts traffic between you and a server — once the HTTPS information transit has terminated, everything is fair game.

    This means that if your computer is already infected with malware, or you’ve been tricked into running some malicious software — all the HTTPS in the world won’t do anything for you.

    Furthermore, if any exploits exist on the HTTPS server, an attacker can simply wait until the HTTPS transaction has finished, then grab whatever data necessary at another layer (the web service layer, for example).

    SSL certificates themselves are also frequently abused. The way they work in web browsers, for instance, is quite error prone:

    In one well-known case, a popular certificate authority mistakenly signed numerous fake and fraudulent certificates, directly compromising the security of (millions?) of Mozilla users.

    While HTTP doesn’t offer encryption of any type, at least you know what you’re dealing with.

    HTTPS Traffic Can be Intercepted Easily

    If you’re building a web service that is meant to be consumed through insecure devices (like mobile apps), you might be under the impression that since your service is running over HTTPS, users are unable to intercept and read your private messages.

    If that’s what you thought, you’d be wrong.

    Users can easily setup a proxy on their computer to intercept and inspect all HTTPS traffic, thereby bypassing your very own SSL certificate checks, and allowing your private information to be directly leaked.

    This blog post walks you through intercepting and reading private HTTPS messages on mobile devices.

    Think you’re doing it right? Don’t count on it! Even large companies like Uber have had their mobile apps reverse engineered, despite their HTTPS usage. If you’re in the mood, I can’t recommend reading this article enough.

    It’s time to accept the fact that no matter what you do, attackers will be able to read your traffic in one way or another. Instead of wasting engineering time trying to fix and patch common SSL issues, spend your time working on your core product or service and just use HTTP wisely.

    HTTPS Exploits Exist

    It’s well known that HTTPS isn’t invulnerable. There have been numerous HTTPS exploits over the years: Heartbleed, BEAST, CRIME, and POODLE, to name a few.

    It’s inevitable that there will be more attacks in the future. If you pair this with the fact that the NSA is spending insane amounts of money to capture and store SSL traffic for future decryption — it seems pointless to use HTTPS considering that your private traffic will almost certainly be made public at some point in the future.

    HTTPS is Expensive

    The last main point I want to cover is that HTTPS is expensive. To purchase a certificate that browsers and web clients will recognize, you have to purchase an SSL certificate from a root certificate authority.

    This isn’t cheap.

    SSL Certificates can range from a few dollars per year to thousands — and if you’re building a distributed application that relies on multiple microservices, you’ll need more than just one.

    This can quickly add up to a lot of money, which is particularly expensive for people building smaller projects, or looking to launch a new service on a tight budget.

    Why HTTP is a Good Choice

    On the flip side, let’s stop being negative for a moment, and instead focus on the positive: what makes HTTP great. Most developers don’t appreciate its benefits.

    Secure in the Right Conditions

    While HTTP itself doesn’t offer any security whatsoever, by properly setting up your infrastructure and network, you can avoid almost all security issues.

    Firstly, for all internal HTTP services you might be using, ensure that your network is private and can’t be publicly sniffed for packets. This means you’ll probably want to deploy your HTTP services inside of a very secure network like Amazon’s EC2.

    By deploying public cloud servers on EC2, you’re guaranteed to have top-notch network security, to prevent any other AWS customers from sniffing your network traffic.

    Use HTTP’s Insecurity for Scaling

    Something not many people think about when obsessing over HTTP’s lack of security and encryption is how well it scales.

    Most modern web applications scale via queueing.

    You have a web server which accepts incoming requests, then farms individual jobs out to a cluster of servers on the same network which perform more CPU and memory intensive tasks.

    To handle queueing, people typically use a system like RabbitMQ or Redis. Both are excellent choices — but what if you could get all the benefits of queueing without using any infrastructure except your network?

    With HTTP, you can!

    Here’s how it works: instead of pushing jobs onto a queue, your web server simply fires plain HTTP requests at the worker servers sitting on your internal network, and each worker processes its job and responds when it’s done.

    The above system works exactly like a distributed queue, is fast, efficient, and simple.

    Using HTTPS, the above scenario would be impossible, but, by using HTTP, you can dramatically speed up your applications while removing your need for infrastructure services — a big win.

    Insecure and Proud

    The last point I’d like to mention in favor of using HTTP instead of HTTPS for your next project is: insecurity.

    Yes, HTTP provides no security for your users — but is security even really necessary?

    Not only do most ISPs monitor network traffic, but it’s become quite apparent over the past couple of years that the government has been storing and decrypting network traffic for a long time.

    Worrying about using HTTPS is like putting a padlock on a fence that is 1 foot high: it’s basically impossible to secure your applications — so why bother?

    By developing services that rely on HTTP alone, you’re not giving your users a false sense of security, tricking them into thinking they are secure when, in fact, they most likely aren’t.

    By building your apps on HTTP, you’re simplifying your life, and increasing transparency with your users. Consider it!

    JUST KIDDING!! >:)

    Happy April Fools’ Day!

    I hope you didn’t really think I would recommend against using HTTPS! I want to be perfectly clear: if you’re building any sort of web application, use HTTPS!

    It doesn’t matter what sort of application or service you’re building, if it’s not using HTTPS, you are doing it wrong.

    Now, let’s talk about why HTTPS is awesome.

    HTTPS is Secure

    Bouncer Sketch

    HTTPS is a great protocol with an excellent track record. While there have been several exploits over the years, they’ve all been relatively minor, and furthermore, they’ve been patched quickly.

    And yes, while the NSA is most certainly storing SSL traffic somewhere, the odds that they’re able to decrypt even a small amount of SSL traffic are vanishingly small — this would require fast, fully functional quantum computers and would cost an insane amount of money. Odds are, nothing like this exists, so you can sleep easily at night knowing that the SSL on your site is actually protecting user data in transit.

    HTTPS is Fast

    I mentioned above how “painfully slow” HTTPS was, but the truth is almost completely the opposite.

    While HTTPS certainly requires more CPU for terminating SSL connections — this processing overhead is negligible on modern computers. The odds that you’ll ever hit an SSL bottleneck are effectively 0.

    You’re far more likely to have a bottleneck with your application or web server performance.

    HTTPS is an Important Safeguard

    While HTTPS isn’t a one-size-fits-all solution to web security, without it you’re guaranteed to be insecure.

    All web security relies on you having HTTPS. If you don’t have it, then no matter how strongly you hash your passwords or how much data encryption you do, an attacker can simply monitor a client’s network connection, read their credentials — then BAM — game over.

    So — while you can’t rely on HTTPS to solve all of your security problems, you absolutely, 100% need to use it for all services you build — otherwise there’s absolutely no way to secure your application.

    Furthermore, while certificate signing is most definitely not a perfect practice, each browser vendor has pretty strict and rigorous rules for certificate authorities. It’s VERY hard to become a trusted certificate authority, and keeping yourself in good standing is equally tough.

    Mozilla (and the other vendors) do an excellent job of pruning bad root authorities, and are generally awesome stewards of internet security.

    HTTPS Traffic Interception Is Avoidable

    Earlier, I mentioned that it’s quite easy to man-in-the-middle SSL traffic by creating your own SSL certificates, trusting them, and then intercepting traffic.

    While this is most definitely possible, it’s fairly easy to prevent via SSL Certificate Pinning.

    Essentially, by following the guidelines in the article linked to above, you can force your clients to trust only a true and valid SSL certificate, effectively preventing all sorts of SSL MITM attacks before they can even start =)

    If you’re deploying an SSL service to an untrusted location (like a mobile or desktop app), you should most definitely look into using SSL Certificate Pinning.
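    The article linked above has the full guidelines; purely as an illustrative sketch (OkHttp, the host name, and the pin value below are my own assumptions and placeholders, not something this post prescribes), pinning in a Java client might look like this:

    import okhttp3.CertificatePinner;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.Response;

    public class PinningExample {
        public static void main(String[] args) throws Exception {
            // Pin the expected public key hash for your API host; both values below are placeholders.
            CertificatePinner pinner = new CertificatePinner.Builder()
                    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                    .build();

            OkHttpClient client = new OkHttpClient.Builder()
                    .certificatePinner(pinner)
                    .build();

            Request request = new Request.Builder()
                    .url("https://api.example.com/hello")
                    .build();

            // The call fails with an SSLPeerUnverifiedException if the server presents
            // a certificate chain that does not match the pinned key.
            try (Response response = client.newCall(request).execute()) {
                System.out.println(response.code());
            }
        }
    }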

    HTTPS Isn’t Expensive (anymore)

    While it’s true that historically, HTTPS has been expensive — this is no longer the case. You can currently purchase very cheap SSL certificates from a number of web hosts.

    Furthermore, the EFF (Electronic Frontier Foundation) is just about to launch a completely free SSL certificate provider: https://letsencrypt.org/

    It’s launching in 2015, and will invariably change the game for all web developers. Once Let’s Encrypt goes live, you’ll be able to encrypt 100% of your sites and services for no cost at all.

    Be sure to check out their site and subscribe for updates!

    HTTP Isn’t Secure on Private Networks

    Earlier, when I talked about how HTTP security doesn’t matter, especially if your network is locked down — I was lying to you.

    While network security matters, so does transit encryption!

    If an attacker is able to gain access to any of your internal services, all HTTP traffic can be intercepted and read, regardless of how ‘secure’ your network may be. This is a very bad thing.

    This is why HTTPS is critically important on both public AND private networks.

    BONUS: If you’re deploying services on AWS, DON’T COUNT ON YOUR NETWORK TRAFFIC BEING PRIVATE! AWS networks are PUBLIC, meaning that other AWS customers can potentially sniff your private network traffic — be very careful.

    HTTP and Queueing

    When I mentioned earlier how you could replace queuing infrastructure with HTTP — I wasn’t really wrong, but OH MAN. What a horrible idea!

    Relying on poor security practices to “scale” your service is a bad, horrible, awful, VERY BAD idea.

    Please don’t do it (unless it’s a proof-of-concept, in which case it’d make for a very cool demo to say the least)!

    Summary

    If you’re building web services, you should most definitely be using HTTPS.

    It’s easy, cheap and builds user trust, so there’s no excuse not to. As developers, it’s our job to help protect user security — and one of the best ways to do that is to force HTTPS site-wide.

    I hope you enjoyed this article, and got a good laugh or two.

    If you liked this article, you might also like last year’s April Fools’ post as well: Why You Might Want to Store Your Passwords in Plain Text.

    Ludovic Poitou - ForgeRockMeet ForgeRock at SIdO Lyon, April 7 and 8 [Technorati links]

    April 01, 2015 08:35 AM

    I will be there with our team at SIdO, the event 100% dedicated to the Internet of Things, which takes place in Lyon on April 7 and 8, 2015.

    In addition to our presence in the coworking space during the two days, Lasse Andresen, CTO of ForgeRock, will run a workshop with ARM and Schneider on the role of Identity in the Internet of Things, on Wednesday the 8th at 1:30 PM.

    Feel free to come visit us in the coworking space.


    Filed under: General, InFrench Tagged: conference, ForgeRock, france, identity, internet-of-things, iot, Lyon, privacy, security

    Ludovic Poitou - ForgeRockJoin us for The Identity Summit [Technorati links]

    April 01, 2015 07:23 AM

    Meet the Security and Identity rockstars and thought leaders at The Identity Summit, May 27-29th 2015 !

    In addition to the two full days of sessions, this year at The Identity Summit, all ForgeRock customers are invited to participate in a pre-event community day where you will be able to interact with ForgeRock product development and other customers.

    Photo by Anthony Quintano – https://www.flickr.com/photos/quintanomedia/

    The event will take place at the Ritz in Half Moon Bay, California.

    Register today. Sign-up for the customer user group is part of The Identity Summit registration process. Make sure to add the Customer User Group as an “Additional Item” before submitting your information.

    The call for speakers is open until April 13th.


    Filed under: General, Identity Tagged: California, conference, ForgeRock, identity, IRM, summit, user-group
    March 30, 2015

    GluuGluu Server Training in San Francisco, CA [Technorati links]

    March 30, 2015 09:18 PM


    After RSA Security Conference on Wednesday, April 22, join Gluu CEO Mike Schwartz at WeWork SOMA for a hands on training session exploring how to use the Gluu Server to secure web and mobile applications.

    This workshop will cover how to deploy the Gluu Server on a fresh VM, how to configure single sign-on (SSO) to a SAML and OpenID Connect protected application, and how to use UMA, a new profile of OAuth2, to enforce access management policies to digital resources.

    Space is limited. First come first serve!

    RSVP HERE!

    Use the password gluuserver to gain access to the registration page.

    For a full list of upcoming Gluu events, check our website.

    About Gluu:
    Gluu publishes free open source Internet security software that universities, government agencies and companies can use to enable Web and mobile applications to securely identify a person, and manage what information they are allowed to access. Using a Gluu Server, organizations can centralize their authentication and authorization service and leverage standards such as OpenID Connect, UMA, and SAML 2.0 to enable federated single sign-on (SSO) and trust elevation.

    IS4UFIM2010: GUI for configuring your scheduler [Technorati links]

    March 30, 2015 04:25 PM

    Intro

    I described in previous posts how I developed a Windows service to schedule FIM. The configuration of this scheduler consists of XML files. Because it is not straightforward to ensure you have a consistent configuration that satisfies your needs, I developed an interface to help with the configuration. The tool itself is built using the WPF framework (.NET 4.5) and has the following requirements:
    Note that it is possible to use the tool on any server or workstation. After saving your changes you can transfer the configuration files to your FIM server.

    Configure triggers

    The first tab, job configuration, allows you to add, delete, rename and configure triggers. Each trigger specifies a run configuration and the schedule on which that run configuration will be fired. A typical delta schedule for FIM is "each 10 minutes during working hours". This can be translated to the cron expression "0 0/10 8-18 * * ?". The drop down list with run configurations is automatically populated based on the existing run configurations. Save config performs a validation of the cron expression using the job_scheduling_data_2_0.xsd schema file. If valid, JobConfiguration.xml is saved. A backup of the previous configuration is saved as well. Reset config reloads the interface using the configuration in the file on disk.
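    For reference, the fields of that Quartz-style cron expression (the format validated by the job_scheduling_data_2_0.xsd schema mentioned above) read, from left to right:

       0     - seconds (fire at second 0)
       0/10  - minutes (every 10 minutes, starting at minute 0)
       8-18  - hours (from 8:00 through 18:59)
       *     - day of month (every day)
       *     - month (every month)
       ?     - day of week (no specific day)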

    Configure global parameters

    The second tab, run configuration, offers three tabs to configure the RunConfiguration.xml file. The first of these tabs, global config, allows you to configure some global parameters.
    Save config performs a validation using the RunSchedulingData.xsd schema file. If valid, RunConfiguration.xml is saved. This includes the settings of all three tabs under run configuration. A backup of the previous configuration is saved as well. Reset config reloads the interface using the configuration in the file on disk.

    Configure run configurations

    The run configurations tab allows you to add, delete, rename and edit run configurations. Editing a run configuration comes down to two things: choosing its default run profile and defining the steps it executes.

    Default run profile

    The default run profile is the topmost action. Steps that do not have an action defined take the action defined by their parent. This mechanism allows you to reuse sequences in combination with different profiles. You could have a run configuration with default run profile "Full import full sync" and another with "Delta import delta sync". Both of them could then use the same sequence, resulting in different actions. This mechanism only works if you use a naming convention for run profiles across all connectors in the FIM Synchronization Engine. Run profile names are case sensitive. If the scheduler tries to start a run profile that does not exist, the management agent will not be run. In the example here, the sequence Default will be run with run profile "Delta import delta sync".

    Steps

    The add step button opens a new dialog where you can select the type of step. Because the server export info is read, a list of possible actions is available. However, as explained above, you do not need to specify an action.

    Configure sequences

    The sequences tab allows you to add, delete, rename and edit sequences. The functionality provided here is identical to the one on the run configurations tab. Whether the sequence is executed as a linear or parallel sequence is defined by the step that calls the sequence, so a sequence can be defined as linear in one run configuration (or other sequence) and as parallel somewhere else.

    Download

    You can find the new release of the IS4U FIM scheduler on GitHub: FIM-Scheduler Release. The setup that installs the scheduler on the FIM server now also includes the GUI tool to configure it.

    Axel NennkerNew Firefox Add-On: QRCode Login [Technorati links]

    March 30, 2015 01:52 PM
    Current login mechanisms suffer from a lack of support by browsers and sites.
    Browsers offer in-browser password storage, but that's about it.
    Standardized authentication methods like HTTP Digest Authentication and HTTP Basic Authentication were never really accepted by commercially successful sites. They work, but the user experience is bad, especially if the user does not have an account yet.

    So most sites are left with form-based authentication, where the site has full control over the UI and UX. Sadly the browser has little to offer here to help the site or the user, other than trying to identify signup and login forms through crude guesses based on the existence of password fields.

    There is no standardized way for sites and browsers to work together.
    Here is a list of attempts to solve some of the above issues:
    Federations have their drawbacks too. Even Facebook login went dark for four hours a while ago, which left sites depending on Facebook without user login.

    In general there is this chicken-egg problem:
    Why should sites support new-mechanism-foo when there is no browser support.
    Why should browsers support new-mechanism-foo when there are no sites using it.

    Then there are password stores. I use passwordsafe to store my passwords in one place. If I do not have access to that place (my PC) then I can't log in. Bummer.
    Others use stores hosted on the Internet, and those usually support most browsers and OSes through plugins/add-ons and non-standard trickery.
    I never could convince myself to trust the providers.

    So. Drum-roll.
    I started to work on a mechanism that keeps the password store on your mobile and allows you to log in on your PC using the PC's camera.

    The user story is as follows:
    1. browse to a site's login page e.g. https://github.com/login
    2. have my Firefox addon installed
      https://github.com/AxelNennker/qrcodelogin
    3. click on the addon's icon
    4. present your credential-qrcode to the PC's camera
    5. be logged in
    Here is an example qrcode containing the credentials as a JSON array
    ["axel@nennker.de","password"]:

      The qrcode could be printed on paper or generated by your password store on your mobile. To help the user with the selection of the matching credentials the addon presents a request-qrcode to be read by the mobile first. This way the mobile ID-client can select the matching credentials.
      (If you don't want to install add-ons to test this, and for a super quick demo of the qrcode reading using your webcam, please go to http://axel.nennker.de/gum.html and scan a code)
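      As a purely illustrative sketch (not part of the add-on; the ZXing library, the file name and the image size are my own assumptions), a credential qrcode containing a JSON array like the one above could be generated as follows:

      import com.google.zxing.BarcodeFormat;
      import com.google.zxing.client.j2se.MatrixToImageWriter;
      import com.google.zxing.common.BitMatrix;
      import com.google.zxing.qrcode.QRCodeWriter;

      import java.nio.file.Paths;

      public class CredentialQrCode {
          public static void main(String[] args) throws Exception {
              // Credentials encoded as the JSON array described above.
              String credentials = "[\"axel@nennker.de\",\"password\"]";

              // Encode the string as a 300x300 QR code.
              BitMatrix matrix = new QRCodeWriter()
                      .encode(credentials, BarcodeFormat.QR_CODE, 300, 300);

              // Write the code to a PNG that could be printed or displayed by a mobile password store.
              MatrixToImageWriter.writeToPath(matrix, "PNG", Paths.get("credentials.png"));
          }
      }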

      What are the benefits?
      What are the drawbacks?

      Screenshots:

      Login page at github with the addon installed:


      Screen after pressing the addon's toolbar icon. The qrcode helps the mobile ID-client to find the matching credentials:
      Screen showing the camera picture which is scanned for qrcodes:
      This is clearly only a first step but I believe that it has potential to be a true user-centric solution that helps me and you to handle the password mess.









      March 28, 2015

      Bill Nelson - Easy IdentityOpenDJ and the Fine Art of Impersonation [Technorati links]

      March 28, 2015 01:55 PM

       

      Directory servers are often used in multi-tier applications to store user profiles, preferences, or other information useful to the application.  Oftentimes the web application includes an administrative console to assist in the management of that data, allowing operations such as user creation or password reset.  Multi-tier environments pose a challenge, however, as it is difficult to determine the identity of the user that actually performed the operation as opposed to the user that simply showed up in the log file(s).

      Consider the relationship between the user logging in to the web application and the interaction between the web application and a directory server such as OpenDJ.

       


       

      There are two general approaches that many web applications follow when performing actions against the directory server; I will refer to these as Application Access and User Access.  In both scenarios, the user must first log in to the web application.  Their credentials may be validated directly against the directory server (using local authentication) or they may be accessing the web application using single sign-on.  In either pattern, the user must first prove his identity to the web application before they are allowed to perform administrative tasks.  The differences become apparent post authentication and can be found in the manner in which the web application integrates with the directory server to perform subsequent administrative tasks.

       

      Note:  The following assumes that you are already familiar with OpenDJ access control.  If this is not the case, then it is highly advisable that you review the following:  OpenDJ Access Control Explained.

       

      Approach 1:  Application Access

       

      In the case of the Application Access approach all operations against the directory server are performed as an application owner account configured in the directory server.  This account typically has a superset of privileges required by all Web Application administrators in order to perform the tasks required of those users.  In this scenario, the Web Application binds to the directory server using its Web Application service account and performs the operation.  A quick look in the directory server log files demonstrates that all operations coming from the Web Application are performed by the service account and not the user who logged in to the Web Application.

       

      [27/Mar/2015:16:37:40 +0000] BIND REQ conn=2053 op=0 msgID=1 version=3 type=SIMPLE dn=”uid=WebApp1,ou=AppAccounts,dc=example,dc=com

      [27/Mar/2015:16:37:40 +0000] BIND RES conn=2053 op=0 msgID=1 result=0 authDN=”uid=WebApp1,ou=AppAccounts,dc=example,dc=com” etime=1

      [27/Mar/2015:16:37:40 +0000] SEARCH REQ conn=2053 op=1 msgID=2 base=”ou=People,dc=example,dc=com” scope=wholeSubtree filter=”(l=Tampa)” attrs=”ALL”

      [27/Mar/2015:16:37:40 +0000] SEARCH RES conn=2053 op=1 msgID=2 result=0 nentries=69 etime=2

       

      While easiest to configure, one drawback to this approach is that you need to reconcile the directory server log files with the Web Application log files in order to determine the identity of the user performing the action.  This makes debugging more difficult.  Not all administrators have the same access rights, so another problem with this approach is that entitlements must be maintained and/or recognized in the Web Application and associated with Web Application users.  This increases complexity in the Web Application as those relationships must be maintained in yet another database.  Finally, some security officers may find this approach to be insecure as the entry appearing in the log files is not indicative of the user performing the actual operation.

       

      Approach 2:  User Access

       

      The User Access approach is an alternative where the Web Application impersonates the user when performing operations.  Instead of the Web Application binding with a general service account, it takes the credentials provided by the user, crafts a user-specific distinguished name, and then binds to the directory server with those credentials.  This approach allows you to manage access control in the directory server and the logs reflect the identity of the user that performed the operation.

       

      [27/Mar/2015:17:01:01 +0000] BIND REQ conn=2059 op=0 msgID=1 version=3 type=SIMPLE dn=”uid=bnelson,ou=Administators,dc=example,dc=com

      [27/Mar/2015:17:01:01 +0000] BIND RES conn=2059 op=0 msgID=1 result=0 authDN=” uid=bnelson,ou=Administators,dc=example,dc=com ” etime=1

      [27/Mar/2015:17:40:40 +0000] SEARCH REQ conn=2059 op=1 msgID=2 base=”ou=People,dc=example,dc=com” scope=wholeSubtree filter=”(l=Tampa)” attrs=”ALL”

      [27/Mar/2015:17:40:40 +0000] SEARCH RES conn=2059 op=1 msgID=2 result=0 nentries=69 etime=2

       

      A benefit to this approach is that entitlements can be maintained in the directory server itself.  This reduces the complexity of the application, but requires that you configure appropriate access controls for each user.  This can easily be performed at the group level, however, and even dynamically configured based on user attributes.  A drawback to this approach is that the Web Application is acting as if it were the user – which it is not.  The Browser is essentially the user and the Browser is not connecting directly to the directory server.  So while the log files may reflect the user, they are somewhat misleading as the connection will always be from the Web Application.  The other problem with this approach is that the user’s credentials must be cached within the Web Application in order to perform subsequent operations against the directory server.  One could argue that you could simply keep the connection between the Web Application and the directory server open, and that is certainly an option, but you would need to keep it open for the user’s entire session to prevent them from having to re-authenticate.  This could lead to performance problems if you have extended session durations, a large number of administrative users, or a number of concurrent sessions by each administrative user.

       

      Proxy Control – The Hybrid Approach

       

      There are both benefits and drawbacks to each of the previously mentioned approaches, but I would like to offer up an alternative proxy-based approach that is essentially a hybrid between the two.  RFC 4370 defines a proxied authorization control (2.16.840.1.113730.3.4.18) that allows a client (i.e. the Web Application) to request the directory server (i.e. OpenDJ) to perform an operation not based on the access control granted to the client, but based on another identity (i.e. the person logging in to the Web Application).

      The proxied authorization control requires a client to bind to the directory server as itself, but it allows the client to impersonate another entry for a specific operation.  This control can be used in situations where the application is trusted, but it needs to perform operations on behalf of different users.  The fact that the client is binding to the directory server eliminates the need to cache the user’s credentials (or re-authenticate for each operation).  The fact that access is being determined based on that of the impersonated user means that you can centralize entitlements in the directory server and grant access based on security groups.  This is essentially the best of both worlds and keeps a smile on the face of your security officer (as if that were possible).

      So how do you configure proxy authorization?  I am glad you asked.

       

      Configuring Proxied Access

       

      Before configuring proxied access, let’s return to the example of performing a search based on Application Access.  The following is an example of a command line search that can be used to retrieve information from an OpenDJ server.  The search operation uses the bindDN and password of the WebApp1 service account.

       

      ./ldapsearch -h localhost -D “uid=WebApp1,ou=AppAccounts,dc=example,dc=com ” -w password -b “ou=People,dc=example,dc=com” “l=Tampa”

       

      The response to this search would include all entries that matched the filter (l=Tampa) beneath the container (ou=People).  My directory server has been configured with 69 entries that match this search and as such, the OpenDJ access log would contain the following entries:

       

      [27/Mar/2015:16:37:40 +0000] SEARCH REQ conn=2053 op=1 msgID=2 base=”ou=People,dc=example,dc=com” scope=wholeSubtree filter=”(l=Tampa)” attrs=”ALL”

      [27/Mar/2015:16:37:40 +0000] SEARCH RES conn=2053 op=1 msgID=2 result=0 nentries=69 etime=2

       

      As previously mentioned, these are the results you would expect to see if the search was performed as the WebApp1 user.  So how can you perform a search impersonating another user?  The answer lies in the parameters used in the search operation.  The LDAP API supports a proxied search; you just need to determine how to access this functionality in your own LDAP client.

       

      Note: I am using ldapsearch as the LDAP client for demonstration purposes.  This is a command line tool that is included with the OpenDJ distribution.  If you are developing a web application to act as the LDAP client, then you would need to determine how to access this functionality within your own development framework.
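      Purely as an illustration of what this could look like from application code (the UnboundID LDAP SDK, host and port below are my own assumptions, not something prescribed by this post, and the privilege and ACI configured in the steps that follow are still required), a proxied search might be sketched as:

      import com.unboundid.ldap.sdk.LDAPConnection;
      import com.unboundid.ldap.sdk.SearchRequest;
      import com.unboundid.ldap.sdk.SearchResult;
      import com.unboundid.ldap.sdk.SearchScope;
      import com.unboundid.ldap.sdk.controls.ProxiedAuthorizationV2RequestControl;

      public class ProxiedSearchExample {
          public static void main(String[] args) throws Exception {
              // Bind as the application service account (adjust host/port for your OpenDJ instance).
              LDAPConnection connection = new LDAPConnection("localhost", 389,
                      "uid=WebApp1,ou=AppAccounts,dc=example,dc=com", "password");
              try {
                  // The same search used in the ldapsearch examples.
                  SearchRequest request = new SearchRequest(
                          "ou=People,dc=example,dc=com", SearchScope.SUB, "(l=Tampa)");

                  // Attach the proxied authorization v2 control (OID 2.16.840.1.113730.3.4.18)
                  // so that access control is evaluated as uid=bnelson rather than WebApp1.
                  request.addControl(new ProxiedAuthorizationV2RequestControl(
                          "dn:uid=bnelson,ou=People,dc=example,dc=com"));

                  SearchResult result = connection.search(request);
                  System.out.println("Entries returned: " + result.getEntryCount());
              } finally {
                  connection.close();
              }
          }
      }

      If the privilege and ACI described below are in place, the access log should then show authzDN=uid=bnelson for the search, just as in the ldapsearch output later in this post.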

       

      The OpenDJ search command includes a parameter that allows you to use the proxy authorization control.  Type ./ldapsearch --help to see the options for the ldapsearch command and look for the -Y or --proxyAs parameter.

       


       

      Now perform the search again, but this time include the proxy control (without making any changes to the OpenDJ server).  You will be binding as the WebApp1 account, but using the -Y option to instruct OpenDJ to evaluate ACIs based on the following user:  uid=bnelson,ou=People,dc=example,dc=com.

       

      ./ldapsearch -h localhost -D “uid=WebApp1,ou=AppAccounts,dc=example,dc=com” -w password -Y “uid=bnelson,ou=People,dc=example,dc=com” -b “ou=People,dc=example,dc=com” “l=Tampa”

       

      You should see the following response:

       

      SEARCH operation failed

      Result Code:  123 (Authorization Denied)

      Additional Information:  You do not have sufficient privileges to use the proxied authorization control  The request control with Object Identifier (OID) “2.16.840.1.113730.3.4.18” cannot be used due to insufficient access rights

       

      The corresponding entries in OpenDJ’s access log would be as follows:

       

      [27/Mar/2015:10:47:18 +0000] SEARCH REQ conn=787094 op=1 msgID=2 base=”ou=People,dc=example,dc=com” scope=wholeSubtree filter=”(l=Tampa)” attrs=”ALL”

      [27/Mar/2015:10:47:18 +0000] SEARCH RES conn=787094 op=1 msgID=2 result=123 message=”You do not have sufficient privileges to use the proxied authorization control  You do not have sufficient privileges to use the proxied authorization control” nentries=0 etime=1

       

      The key phrase in these messages is the following:

       

      You do not have sufficient privileges to use the proxied authorization control

       

      The key word in that phrase is “privileges” as highlighted above; the WebApp1 service account does not have the appropriate privileges to perform a proxied search and as such, the search operation is rejected.  The first step in configuring proxied access control is to grant proxy privileges to the Application Account.

       

      Step 1:  Grant Proxy Privileges to the Application Account

       

      The first step in allowing the WebApp1 service account to perform a proxied search is to give that account the proxied-auth privilege.  You can use the ldapmodify utility to perform this action as follows:

       

       ./ldapmodify -D “cn=Directory Manager” -w password

      dn: uid=WebApp1,ou=AppAccounts,dc=example,dc=com

      changetype: modify

      add: ds-privilege-name

      ds-privilege-name: proxied-auth

      Processing MODIFY request for uid=WebApp1,ou=AppAccounts,dc=example,dc=com

      MODIFY operation successful for DN uid=WebApp1,ou=AppAccounts,dc=example,dc=com

       

      Now repeat the proxied search operation.

       

      ./ldapsearch -h localhost -D “uid=WebApp1,ou=AppAccounts,dc=example,dc=com” -w password -Y “uid=bnelson,ou=People,dc=example,dc=com” -b “ou=People,dc=example,dc=com” “l=Tampa”

       

      Once again your search will fail, but this time it is for a different reason.

       

      SEARCH operation failed

      Result Code:  12 (Unavailable Critical Extension)

      Additional Information:  The request control with Object Identifier (OID) “2.16.840.1.113730.3.4.18” cannot be used due to insufficient access rights

       

      The corresponding entries in OpenDJ’s access log would be as follows:

       

      [27/Mar/2015:11:39:17 +0000] SEARCH REQ conn=770 op=1 msgID=2 base=” ou=People,dc=example,dc=com ” scope=wholeSubtree filter=”(l=Tampa)” attrs=”ALL”

      [27/Mar/2015:11:39:17 +0000] SEARCH RES conn=770 op=1 msgID=2 result=12 message=”” nentries=0 authzDN=”uid=bnelson,ou=People,dc=example,dc=com” etime=3

       

      As discussed in OpenDJ Access Control Explained, authorization to perform certain actions may consist of a combination of privileges and ACIs.  You have granted the proxied-auth privilege to the WebApp1 service account, but it still needs an ACI to allow it to perform proxy-based operations.  For the purposes of this demonstration, we will use the following ACI to grant this permission.

       

      (targetattr=”*”) (version 3.0; acl “Allow Proxy Authorization to Web App 1 Service Account”; allow (proxy) userdn=”ldap:///uid=WebApp1,ou=AppAccounts,dc=example,dc=com”;)

       

      This ACI will be placed at the root suffix for ease of use, but you should consider limiting the scope of the ACI by placing it at the appropriate branch in your directory tree (and limiting the targetattr values).

       

      Step 2:  Create a (Proxy) ACI for the Application Account

       

      Once again, you can use the ldapmodify utility to update OpenDJ with this new ACI.

       

      ./ldapmodify -D “cn=Directory Manager” -w password

      dn: dc=example,dc=com

      changetype: modify

      add: aci

      aci: (targetattr=”*”) (version 3.0; acl “Allow Proxy Authorization to Web App 1 Service Account”; allow (proxy) userdn=”ldap:///uid=WebApp1,ou=AppAccounts,dc=example,dc=com”;)

      Processing MODIFY request for dc=example,dc=com

      MODIFY operation successful for DN dc=example,dc=com

       

      Now repeat the proxied search a final time.

       

      ./ldapsearch -h localhost -D “uid=WebApp1,ou=AppAccounts,dc=example,dc=com” -w password -Y “uid=bnelson,ou=People,dc=example,dc=com” -b “ou=People,dc=example,dc=com” “l=Tampa”

       

      This time you should see the results of the search returned correctly.  But how do you know that this was a proxied search and not simply one performed by WebApp1 as before?  The clue is once again in the OpenDJ access log file.  Looking in this file, you will see the following entries:

       

      [27/Mar/2015:11:40:23 +0000] SEARCH REQ conn=797 op=1 msgID=2 base=”ou=People,dc=example,dc=com” scope=wholeSubtree filter=”(l=Tampa)” attrs=”ALL”

      [27/Mar/2015:11:40:23 +0000] SEARCH RES conn=797 op=1 msgID=2 result=0 message=”” nentries=69 authzDN=”uid=bnelson,ou=people,dc=example,dc=com” etime=1

       

      The authzDN value contains the DN of the entry used for authorization purposes.  This is a clear indicator that access control was based on the uid=bnelson entry and not uid=WebApp1.

      Still not convinced?  You can verify this by removing the rights for the uid=bnelson entry and running your search again.  Add the following ACI to the top of your tree.

       

      (targetattr=”*”)(version 3.0;acl ”Deny Access to BNELSON”; deny (all)(userdn = “ldap:///uid=bnelson,ou=people,dc=example,dc=com”);)

       

      Now run the search again.  This time, you will not see any errors, but you will also not see any entries returned.  While you are binding as the WebApp1 service account, for all intents and purposes, you are impersonating the uid=bnelson user when determining access rights.

       

      Summary of Steps

       

      The following steps should be performed when configuring OpenDJ for proxied access control.

      1. Create the Application Account in OpenDJ (i.e. WebApp1)
      2. Add the proxied-auth privilege to the Application Account
      3. Create an ACI allowing the Application Account to perform proxy operations
      4. Create a User Account in OpenDJ (i.e. bnelson)
      5. Configure ACIs for User Account as appropriate
      6. Test the configuration by performing a command line search using the proxied access control parameter.

      March 27, 2015

      Julian BondAvast thah, me hearties! [Technorati links]

      March 27, 2015 09:22 AM
      Avast thah, me hearties!

      Google have a new auto-proxy service that speeds up unencrypted web pages by compressing them on Google's proxy servers between the website and your device.
      https://support.google.com/chrome/answer/2392284?p=data_saver_on&rd=1

      This has an unintended but hilarious side effect. There's a bunch of websites that are blocked by UK ISPs for copyright issues. So if you go to, for instance, http://newalbumreleases.net/ you will normally be blocked by a Virgin Media/BT/TalkTalk warning message. But if you have Google's data saving Chrome extension installed it acts like a VPN and side steps the block.

      Then there's https://thepiratebay.se/ They've successfully implemented an https:// scheme that also side steps the same UK ISP block.

      For the moment, we're saved. But stand by to repel boarders!
       Reduce data usage with Chrome’s Data Saver - Chrome Help »

      [from: Google+ Posts]
      March 26, 2015

      Paul MadsenNAPPS - a rainbow of flavours [Technorati links]

      March 26, 2015 08:24 PM
      Below is an arguably unnecessarily vibrant swimlane diagram of the proposed Native Applications (NAPPS) flow for an enterprise-built native application calling an on-prem API.

      The very bottom arrow of the flow (that from Ent_App to Ent_RS) is the actual API call that, if successful, will return the business data back to the native app. That call is what we are trying to enable (with all the rainbow-hued exchanges above).

      As per normal OAuth, the native application authenticates to the RS/API by including an access token (AT). Also shown is the possibility of the native application demonstrating proof of possession for that token, but I'll not touch on that here other than to say that the corresponding spec work is underway.

      What differs in a NAPPS flow is how the native application obtains that access token. Rather than the app itself taking the user through an authentication & authorization flow (typically via the system browser), the app gets its access token via the efforts of an on-device 'Token Agent' (TA). 

      Rather than requesting an access token of a network Authorization Service (as in OAuth or Connect), the app logically makes its request of the TA - as labelled below as 'code Request + PKSE'. Upon receiving such a request from an app, the TA will endeavour to obtain from the Ent_AS an access token for the native app. This step is shown in green below. The TA uses a token it had previously obtained from the AS in order to obtain a new token for the app. 

      In fact, what the TA obtains is not the access token itself, but an identity token (as defined by Connect) that can be exchanged by the app for the more fundamental access token - as shown in pink below. While this may seem like an unnecessary step, it actually

      1. mirrors how normal OAuth works, in which the native app obtains an authz code and then exchanges that for the access token (this having some desirable security characteristics)
      2. allows the same pattern to be used for a SaaS app, ie one where there is another AS in the mix and we need a means to federate identities across the policy domains.




      When I previously wrote 'TA uses a token it had previously obtained from the AS', I was referring to the flow coloured in light blue above. This is a pretty generic OAuth flow; the only novelty is the introduction of the PKSE mechanism to protect against a malicious app stealing tokens by sitting on the app's custom URL scheme.


      Kantara InitiativeUMA V1.0 Approved as Kantara Recommendation [Technorati links]

      March 26, 2015 05:36 PM

      Congratulations to the UMA Work Group on this milestone!

      The User-Managed Access (UMA) Version 1.0 specifications have been finalized as Kantara Initiative Recommendations, the highest level of technical standardization Kantara Initiative can award. UMA has been developed over the last several years by industry leaders in our UMA Work Group.

      The main spec is officially known as User-Managed Access (UMA) Profile of OAuth 2.0 but is colloquially known as UMA Core. UMA Core defines how resource owners can control protected-resource access by clients operated by arbitrary requesting parties, where the resources reside on any number of resource servers, and where a centralized authorization server governs access based on resource owner policies.

      UMA Core calls several other specs by reference, but only one referenced spec is currently a product of the UMA WG. Officially known as OAuth 2.0 Resource Set Registration but colloquially known as RSR, this spec defines a resource set registration mechanism between an OAuth 2.0 authorization server and resource server. The resource server registers information about the semantics and discovery properties of its resources with the authorization server. The RSR mechanism is useful not just for UMA, but also potentially for OpenID Connect and plain OAuth use cases as well.

      March 25, 2015

      GluuUMA 1.0 Approved by Unanimous Vote! [Technorati links]

      March 25, 2015 06:08 PM


      This week voting member organizations at the Kantara Initiative unanimously approved the User Managed Access (UMA) 1.0 specification, a new standard profile of OAuth2 for delegated web authorization. More than half of the member organizations were accounted for on the vote to reach quorum and provide the support needed for approval.

      The unanimous approval of UMA 1.0 marks a major milestone in the advancement and adoption of open web standards for security. In conjunction with OAuth2 and OpenID Connect, UMA provides an open and inter-operable foundation for web, mobile, and IoT security that has until now only been possible to achieve through proprietary vendor APIs.

      UMA offers individuals and organizations an unprecedented level of control over data sharing and resource access, and the unanimous approval of the specification signifies that UMA is well positioned for large scale adoption on the Internet.

      UMA enables web and API servers to delegate policy evaluation to a central policy decision point. An UMA authorization server can evaluate any policies to determine whether to grant access to an API to a certain client or person. The type of authentication used by the person, the geolocation of the request, the time of day, and the score of a fraud detection algorithm are all examples of data that can be considered before access is granted.

      A centralized UMA authorization server (like the Gluu Server) can leverage OpenID Connect for client and person authentication. UMA is in fact complementary to OpenID Connect, and enables a feature known as “trust elevation” or “stepped-up” authentication.

      The Gluu Server will be updated to support UMA 1.0 in release 2.2, expected in time for the RSA Security Conference when the UMA standard is finalized.

      For more information and a list of UMA implementations, visit the Kantara UMA page.


      Christopher Allen - Alacrity10 Design Principles for Governing the Commons [Technorati links]

      March 25, 2015 03:55 AM

      In 2009, Elinor Ostrom received the Nobel Prize in Economics for her “analysis of economic governance, especially the commons.”

      Since then I've seen a number of different versions of her list of the 8 principles for effectively managing against the tragedy of the commons. However, I've found her original words — as well as many adaptations I've seen since — to be not very accessible. Also, since the original release of the list of 8 principles there has been some research resulting in updates and clarifications to her original list.

      This last weekend, at two very different events — one on the future of working and the other on the future of the block chain (used by technologies like bitcoin) — I wanted to share these principles. However, I was unable to effectively articulate them.

      So I decided to take an afternoon to re-read the original, as well as more contemporary adaptations, to see if I could summarize this into a list of effective design principles. I also wanted to generalize them for broader use, as I often apply them to everything from how to manage an online community to how a business should function with competitors.

      I ended up with 10 principles, each beginning with a verb-oriented commandment, followed by the design principle itself.

      1. DEFINE BOUNDARIES: There are clearly defined boundaries around the common resources of a system from the larger environment.
      2. DEFINE LEGITIMATE USERS: There is a clearly defined community of legitimate users of those resources.
      3. ADAPT LOCALLY: Rules for use of resources are adapted to local needs and conditions.
      4. DECIDE INCLUSIVELY: Those using resources are included in decision making.
      5. MONITOR EFFECTIVELY: There exists effective monitoring of the system by accountable monitors.
      6. SHARE KNOWLEDGE: All parties share knowledge of local conditions of the system.
      7. HOLD ACCOUNTABLE: Have graduated sanctions for those who violate community rules.
      8. OFFER MEDIATION: Offer cheap and easy access to conflict resolution.
      9. GOVERN LOCALLY: Community self-determination is recognized by higher-level authorities.
      10. DON'T EXTERNALIZE COSTS: Resource systems embedded in other resource systems are organized in and accountable to multiple layers of nested communities.

      I welcome your thoughts on ways to improve on this summarized list. In particular, in #10 I'd like to find a better way to express its complexity (the original is even more obtuse).

      March 24, 2015

      Kantara InitiativeKantara Initiative grants Scott S. Perry CPA, PLLC Accredited Assessor Trustmark at Assurance Levels 1, 2, 3 and 4 [Technorati links]

      March 24, 2015 06:20 PM

      PISCATAWAY, NJ– (24 March, 2015) – Kantara Initiative is proud to announce that Scott S. Perry CPA, PLLC is now a Kantara-Accredited Assessor with the ability to perform Kantara Service Assessments at Assurance Levels 1, 2, 3 and 4. Scott S. Perry CPA, PLLC is approved to perform Kantara Assessments in the jurisdictions of USA, Canada and Worldwide.

      Joni Brennan, Kantara Executive Director said, “Kantara Initiative is dedicated to enabling verified trust in identity services via our Credential Service Provider Approval Program. We are pleased to welcome Scott S. Perry CPA, PLLC as a new Kantara-Accredited Assessor.” View our growing list of Kantara-Accredited Assessors and Approved Services: https://kantarainitiative.org/trust-registry/ktr-status-list/

      Scott Perry, Principal at Scott S. Perry CPA, PLLC said, “We view our accreditation to perform Kantara Assessments as a key milestone in extending our digital trust services to ICAM and industry Trust Services Providers. We’re proud to be the only CPA firm to offer Kantara Assessments.”

      A global organization, Kantara Initiative Accredits Assessors, Approves Credential and Component Service Providers (CSPs) at Levels of Assurance 1, 2 and 3 to issue and manage trusted credentials for ICAM and industry Trust Framework ecosystems.

      Kantara Initiative's mission includes further harmonizing and extending the Identity Assurance program to address multiple industries and international jurisdictions. Kantara Initiative is already approved as a US Federal Trust Framework Provider.

      The key benefits of Kantara Initiative Trust Framework Program participation include: rapid onboarding of partners and customers, interoperability of technical and policy deployments, an enhanced user experience, and competition and collaboration with industry peers. The Kantara Initiative Trust Framework Program drives toward modular, agile, portable, and scalable assurance to connect business, governments, customers, and citizens. Join Kantara Initiative now to participate in the leading edge of trusted identity innovation development. For further information or to accelerate your business by becoming Kantara Accredited or Approved contact secretariat@kantarainitiative.org

      About Kantara Initiative

      Kantara Initiative is an industry and community organization that enables trust in identity services through our compliance programs, requirements development, and information sharing among communities including: industry, research & education, government agencies and international stakeholders. http://www.kantarainitiative.org  

      About Scott S. Perry CPA, PLLC

      Scott S. Perry CPA, PLLC is a Registered CPA Firm specializing in Technology Audits. The Firm is a global leader in Public Key Infrastructure (PKI), Service Organization Controls (SOC), WebTrust, Kantara, ISO 27001, and Sarbanes-Oxley (SOX) Audits. To learn more of the Firm’s services and qualifications, please visit www.scottperrycpa.com

      Radovan Semančík - nLightComparing Disasters [Technorati links]

      March 24, 2015 06:09 PM

      A month ago I described my disappointment with OpenAM. My rant obviously attracted some attention in one way or another. But perhaps the best reaction came from Bill Nelson. Bill does not agree with me. Quite the contrary. And he has some good points that I can somewhat agree with. But I cannot agree with everything that Bill points out, and I still think that OpenAM is a bad product. I'm not going to discuss each and every point of Bill's blog. I would summarize it like this: if you build on a shabby foundation, your house will inevitably turn to rubble sooner or later. If a software system cannot be efficiently refactored, it is as good as dead.

      However, this is not what I wanted to write about. There is something much more important than arguing about the age of the OpenAM code. I believe that OpenAM is a disaster. But it is an open source disaster. Even though it is bad, I was able to fix it and make it work. It was not easy and it consumed some time and money. But it is still better than my usual experience with the support of closed-source software vendors. Therefore I believe that any closed-source AM system is inherently worse than OpenAM. Why is that, you ask?

      Firstly, I was able to fix OpenAM just by looking at the source code, without any help from ForgeRock. Nobody can do this for a closed-source system, except the vendor. A running system is extremely difficult to replace, and vendors know that. The vendor can ask for an unreasonable sum of money even for a trivial fix. Once the system is up and running, the customer is trapped. Locked in. No easy way out. Maybe some of the vendors will be really nice and won't abuse this situation. But I would not bet a penny on that.

      Secondly, what are the chances of choosing a good product in the first place? Anybody can have a look at the source code and see what OpenAM really is before committing any money to deploy it. But if you are considering a closed-source product you won't be able to do that. The chances are that the product you choose is even worse. You simply do not know. And what is even worse is that you do not have any realistic chance to find it out until it is too late and there is no way out. I would like to believe that all software vendors are honest and that all glossy brochures tell the truth. But I simply know that this is not the case...

      Thirdly, you may be tempted to follow the "independent" product reviews. But there is a danger in getting advice from someone who benefits from cooperation with the software vendors. I cannot speak about the whole industry as I'm obviously not omniscient. But at least some major analysts seem to use evaluation methodologies that are not entirely transparent. And there might be a lot of motivations at play. Perhaps the only way to be sure that the results are sound is to review the methodology. But there is a problem: the analysts usually do not publish details about their methodologies. So what is the real value of the reports that the analysts distribute? How reliable are they?

      This is not really about whether product X is better than product Y. I believe that this is an inherent limitation of the closed-source software industry. The risk of choosing an inadequate product is just too high, because customers are not allowed to access the data that is essential to making a good decision. I believe this: a vendor that has a good product does not need to hide anything from customers, so there is no problem for such a vendor to go open source. If the vendor does not go open source, then it is possible (maybe even likely) that there is something it needs to hide from customers. I recommend avoiding such vendors.

      It will be the binaries built from the source code that will actually run in your environment. Not the analyst charts, not the pitch of the salesmen, not even the glossy brochures. The source code is the only thing that really matters, the only thing that is certain to tell the truth. If you cannot see the source code, then run away. You will probably save a huge amount of money.

      (Reposted from https://www.evolveum.com/comparing-disasters/)

      Vittorio Bertocci - MicrosoftIdentity Libraries: Status as of 03/23/2015 [Technorati links]

      March 24, 2015 06:18 AM


      Time for another update to the libraries megadiagram! If you are nostalgic, you can find the old one here.

      So, what’s new? Well, we added an entire new target platform, .NET core and associated ASP.NET vNext – which resulted in a new drop of ADAL .NET 3.x and new OpenId Connect/OAuth2 middlewares (see the end of this guest post I wrote on the web developer tools team blog).

      Fuuuun :)

      Gerry Beuchelt - MITRECI and CND – Revisited [Technorati links]

      March 24, 2015 12:54 AM
      About this time last year I discussed my thoughts on Counterintelligence (CI) and Computer Network Defense (CND). My basic proposition then was that CND is materially identical (or – more precisely – a monomorphism) to a restriction of CI to Cyber activities. I think that I was way too hesitant in making this claim. After Continue Reading →
      March 23, 2015

      Bill Nelson - Easy IdentityHacking OpenAM – An Open Response to Radovan Semancik [Technorati links]

      March 23, 2015 06:06 PM

       

      I have been working with Sun, Oracle and ForgeRock products for some time now and am always looking for new and interesting topics that pertain to these and other open source identity products.  When Google alerted me to the following blog posting, I just couldn’t resist:

      Hacking OpenAM, Level: Nightmare

      Radovan Semancik | February 25, 2015

      There were two things in the alert that caught my attention.  The first was the title and the obvious implications that it contained and the second is the author of the blog and the fact that he’s associated with Evolveum, a ForgeRock OpenIDM competitor.

      The identity community is relatively small and I have read many of Radovan’s postings in the past.  We share a few of the same mailing lists and I have seen his questions/comments come up in those forums from time to time.  I have never met Radovan in person, but I believe we are probably more alike than different.  We share a common lineage; both being successful Sun identity integrators.  We both agree that open source identity is preferable to closed source solutions.  And it seems that we both share many of the same concerns over Internet privacy.  So when I saw this posting, I had to find out what Radovan had discovered that I must have missed over the past 15 years in working with these products.  After reading his blog posting, however, I do not share his same concerns nor do I come to the same conclusions. In addition, there are several inaccuracies in the blog that could easily be misinterpreted and are being used to spread fear, uncertainty, and doubt around OpenAM.

      What follows are my responses to each of Radovan’s concerns regarding OpenAM. These are based on my experiences of working with the product for over 15 years and as Radovan aptly said, “your mileage may vary.”

      In the blog Radovan comments “OpenAM is formally Java 6. Which is a problem in itself. Java 6 does not have any public updates for almost two years.”

      ForgeRock is not stuck with Java 6.  In fact, OpenAM 12 supports Java 7 and Java 8.  I have personally worked for governmental agencies that simply cannot upgrade their Java version for one reason or another.  ForgeRock must make their products both forward looking as well as backward compatible in order to support their vast customer base.

      In the blog Radovan comments “OpenAM also does not have any documents describing the system architecture from a developers point of view.”


      I agree with Radovan that early versions of the documentation were limited.  As with any startup, documentation is one of the things that suffers during the initial phases, but over the past couple of years, this has flipped.  Due to the efforts of the ForgeRock documentation team I now find most of my questions answered in the ForgeRock documentation.  In addition, ForgeRock is a commercial open source company, so they do not make all high value documents publicly available.  This is part of the ForgeRock value proposition for subscription customers.

      In the blog Radovan comments “OpenAM is huge. It consists of approx. 2 million lines of source code. It is also quite complicated. There is some component structure. But it does not make much sense on the first sight.”


      I believe that Radovan is confusing the open source trunk with commercial open source product.  Simply put, ForgeRock does not include all code from the trunk in the OpenAM commercial offering.  As an example the extensions directory, which is not part of the product, has almost 1000 Java files in it.

      More importantly, you need to be careful in attempting to judge functionality, quality, and security based solely on the number of lines of code in any product.  When I worked at AT&T, I was part of a development team responsible for way more than 2M lines of code.  My personal area of responsibility was directly related to approximately 250K lines of code that I knew inside and out.  A sales rep could ask me a question regarding a particular feature or issue and I could envision the file, module, and even where in the code the question pertained (other developers can relate to this).  Oh, and this code was rock solid.

      In the blog Radovan comments that the “bulk of the OpenAM code is still effectively Java 1.4 or even older.”


      Is this really a concern?  During the initial stages of my career as a software developer, my mentor beat into my head the following mantra:

      If it ain’t broke, don’t fix it!

      I didn’t always agree with my mentor, but I was reminded of this lesson each time I introduced bugs into code that I was simply trying to make better.  Almost 25 years later this motto has stuck with me but over time I have modified it to be:

      If it ain’t broke, don’t fix it, unless there is a damn good reason to do so!

      It has been my experience that ForgeRock follows a mantra similar to my modified version.  When they decide to refactor the code, they do so based on customer or market demand, not just because there are newer ways to do it.  If the old way works, performance is not limited, and security is not endangered, then why change it?  Based on my experience with closed-source vendors, this is exactly what they do; their source code, however, is hidden so you don’t know how old it really is.

      A final thought on refactoring.  ForgeRock has refactored the Entitlements Engine and the Secure Token Service (both pretty mammoth projects) all while fixing bugs, responding to RFEs, and implementing new market-driven features.

      In my opinion, ForgeRock product development is focused on the right areas.

      In the blog Radovan comments “OpenAM is in fact (at least) two somehow separate products. There is “AM” part and “FM” part.”


      From what I understand, ForgeRock intentionally keeps the federation code independent. This was done so that administrators could easily create and export a “Fedlet” which is essentially a small web application that provides a customer with the code they need to implement SAML in a non-SAML application.  In short, keeping it separate allows for sharing between the OpenAM core services and providing session independent federation capability.  Keeping federation independent has also made it possible to leverage the functionality in other products such as OpenIG.

      In the blog Radovan comments “OpenAM debugging is a pain. It is almost uncontrollable, it floods log files with useless data and the little pieces of useful information are lost in it.“


      There are several places that you can look in order to debug OpenAM issues and where you look depends mostly on how you have implemented the product.

      I will agree with Radovan’s comments that this can be intimidating at first, but as with most enterprise products, knowing where to look and how to interpret the results is as much of an art as it is a science.  For someone new to OpenAM, debugging can be complex.  For skilled OpenAM customers, integrators, and ForgeRock staff, the debug logs yield a goldmine of valuable information that often assists in the rapid diagnosis of a problem.

      Note:  Debugging the source code is the realm of experienced developers and ForgeRock does not expect their customers to diagnose product issues.

      For those who stick strictly to the open source version, the learning curve can be steep and they have to rely on the open source community for answers (but hey, what do you want for free).  ForgeRock customers, however, will most likely have taken some training on the product to know where to look and what to look for.  In the event that they need to work with ForgeRock’s 24×7 global support desk, then they will most likely be asked to capture these files (as well as configuration information) in order to submit a ticket to ForgeRock.

      In the blog Radovan comments that the “OpenAM is still using obsolete technologies such as JAX-RPC. JAX-RPC is a really bad API.” He then goes on to recommend Apache CXF and states “it takes only a handful of lines of code to do. But not in OpenAM.”

      Ironically, ForgeRock began migrating away from JAX-RPC towards REST-based web services as early as version 11.0. Now with OpenAM 12, ForgeRock has a modern (fully documented) REST STS along with a WS-TRUST Apache CXF based implementation (exactly what Radovan recommends).

      ForgeRock’s commitment to REST is so strong, in fact, that they have invested heavily in the ForgeRock Common REST (CREST) Framework and API – which is used across all of their products.  They are the only vendor that I am aware of that provides REST interfaces across all products in their IAM stack.  This doesn’t mean, however, that ForgeRock can simply eliminate JAX-RPC functionality from the product.  They must continue to support JAX-RPC to maintain backwards compatibility for existing customers that are utilizing this functionality.

      In the blog Radovan comments “OpenAM originated between 1998 and 2002. And the better part of the code is stuck in that time as well.”


      In general, Radovan focuses on very specific things he does not like in OpenAM, but ignores all the innovations and enhancements that have been implemented since Sun Microsystems.  As mentioned earlier, ForgeRock has continuously refactored, rewritten, and added several major new features to OpenAM.

      “ForgeRock also has a mandatory code review process for every code modification. I have experienced that process first-hand when we were cooperating on OpenICF. This process heavily impacts efficiency and that was one of the reasons why we have separated from OpenICF project.”

      I understand how in today’s Agile focused world there is the tendency to shy away from old school concepts such as design reviews and code reviews.  I understand the concerns about how they “take forever” and “cost a lot of money”, but consider the actual cost of a bug getting out the door and into a customer’s environment.  The cost is borne by both the vendor and the customer, but ultimately it is the vendor who incurs a loss of trust, reputation, and ultimately customers.  Call me old school, but I will opt for code reviews every time – especially when my customer’s security is on the line.

      Note:  there is an interesting debate on the effectiveness of code reviews on Slashdot.

      Conclusion

      So, while I respect Radovan’s opinions, I don’t share them and apparently neither do many of the rather large companies and DOD entities that have implemented OpenAM in their own environments.  The DOD is pretty extensive when it comes to product reviews and I have worked with several Fortune 500 companies that have had their hands all up in the code – and still choose to use it.  I have worked with companies that elect to have a minimal IAM implementation team (and rely on ForgeRock for total support) to those that have a team of developers building in and around their IAM solution.  I have seen some pretty impressive integrations between OpenAM log files, debug files, and the actual source code using tools such as Splunk.  And while you don’t need to go to the extent that I have seen some companies go in getting into the code, knowing that you could if you wanted to is a nice thing to have in your back pocket.  That is the benefit of open source code and one of the benefits of working with ForgeRock in general.

      I can remember working on an implementation for one rather large IAM vendor where we spent more than three months waiting for a patch.  Every status meeting with the customer became more and more uncomfortable as we waited for the vendor to respond.  With ForgeRock software, I have the opportunity to look into the code and put in my own temporary patch if necessary.  I can even submit the patch to ForgeRock and if they agree with the change (once it has gone through the code review), my patch can then be shared with others and become supported by ForgeRock.

      It is the best of both worlds, it is commercial open source!

       

       

       


      KatasoftHow to Manage API Authentication Lifecycle on Mobile Devices [Technorati links]

      March 23, 2015 03:00 PM

      If you didn’t catch it, in the last article I explained how to build and deploy a real mobile app that uses OAuth2 authentication for your private API service.

      In this article, I’m going to cover a tightly related topic: how to properly manage your OAuth2 API token lifecycle.

      Because things like token expiration and revocation are so paramount to API security, I figured they deserved their own discussion here.

      Token Expiration

      One of the most common questions we get here at Stormpath, when talking about token authentication for mobile devices, is about token expiration.

      Developers typically ask us this:

      “This OAuth2 stuff with JSON Web Tokens sounds good, but how long should I allow my access tokens to exist before expiring them? I don’t want to force my users to re-authenticate every hour. That would suck.”

      This is an excellent question. The answer is a bit tricky though. Here are some general rules:

      If you’re dealing with any form of sensitive data (money, banking data, etc.), don’t bother storing access tokens on the mobile device at all. When you authenticate the user and get an access token, just keep it in memory. When a user closes your app, that memory will be cleaned up, and the token will be gone. This will force users to log into your app every time they open it, but that’s a good thing.

      For extra security, make sure your tokens themselves expire after a short period of time (eg: 1 hour) — this way, even if an attacker somehow compromises your access token, it’ll still expire fairly quickly.

      If you’re building an app that holds sensitive data that isn’t related to money, you’re probably fine forcing tokens to expire after roughly a month. For instance, if I were building a mobile app that allowed users to take fitness progress photos of themselves to review at a later time, I’d use a 1 month setting.

      The above setting is a good idea as it doesn’t annoy users by requiring them to re-input their credentials every time they open the app, but also doesn’t expose them to unnecessary risk. In the worst case scenario above, if a user’s access token is compromised, an attacker might be able to view this person’s progress photos for up to one month.

      If you’re building a massive consumer application, like a game or social application, you should probably use a much more liberal expiration time: anywhere from 6 months to 1 year.

      For these sorts of applications, there is very little risk storing an access token for a long period of time, as the service contains only low-value content that can’t really hurt a user much if leaked. If a token is compromised, it’s not the end of the world.

      This strategy also has the benefit of not annoying users by prompting them to re-authenticate very frequently. For many mass consumer applications, signing in is considered a big pain, so you don’t want to do anything to break down your user experience.
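
      To make the guidelines above concrete, here is a minimal sketch, assuming the PyJWT library; the app categories, lifetimes, and signing secret are placeholders, not a prescription:

          # Minimal sketch: pick a token lifetime based on how sensitive the app is,
          # then bake it into the token's "exp" claim. Values here are placeholders.
          from datetime import datetime, timedelta
          import jwt  # pip install PyJWT

          TOKEN_LIFETIMES = {
              "banking": timedelta(hours=1),   # sensitive: short-lived, keep in memory
              "fitness": timedelta(days=30),   # moderately sensitive
              "game": timedelta(days=365),     # mass consumer
          }

          def issue_access_token(user_id, app_type, secret):
              payload = {
                  "sub": user_id,
                  "exp": datetime.utcnow() + TOKEN_LIFETIMES[app_type],
              }
              return jwt.encode(payload, secret, algorithm="HS256")

          token = issue_access_token("e3457285-b604-4990-b902-960bcadb0693",
                                     "fitness", "change-me")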

      Token Revocation

      Let’s now talk about token revocation. What do you do if an access token is compromised?

      Firstly, let’s discuss the odds of this happening. In general: they are very low. Using the recommended data stores for Android and iOS will greatly reduce the risk of your tokens being compromised, as the operating system provides a lot of built-in protections for storing sensitive data like access tokens.

      But, let’s assume for this exercise that a user using your mobile app lost their phone, a savvy hacker grabbed it, broke through the OS-level protections, and was able to extract your API service’s access token.

      What do you do?

      This is where token revocation comes into play.

      It is, in general, a good idea to support token revocation for your API service. What this means is that you should have a way to strategically invalidate tokens after issuing them.

      NOTE: Many API services do not support token revocation, and as such, simply rely on token expiration times to handle abuse issues.

      Supporting token revocation means you’ll have to go through an extra few steps when building this stuff out:

      1. You’ll need to store all access tokens (JWTs) that you generate for clients in a database. This way, you can see what tokens you’ve previously assigned, and which ones are valid.

      2. You’ll need to write an API endpoint which accepts an access token (or user credentials) and removes either the specific access token or all access tokens from a user’s account.

      For those of you wondering how this works, the official OAuth2 Revocation Spec actually talks about it in very simple terms. The gist of it is that you write an endpoint like /revoke that accepts POST requests, with the token or credentials in the body of the request.
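
      As a rough illustration only (Flask and the in-memory set below are stand-ins, not a recommendation of a specific stack), a minimal /revoke endpoint in the spirit of the OAuth2 revocation spec (RFC 7009) might look like this:

          # Minimal sketch of a revocation endpoint. In a real service the revoked
          # tokens would live in your token database, not an in-memory set.
          from flask import Flask, request

          app = Flask(__name__)
          revoked_tokens = set()

          @app.route("/revoke", methods=["POST"])
          def revoke():
              token = request.form.get("token")
              if token:
                  revoked_tokens.add(token)
              # RFC 7009 says to return 200 even if the token was unknown
              return "", 200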

      The idea is basically this though: once you know that a given access token, or user account, has been compromised, you’ll issue the appropriate revocation API request to your private API service. You’ll either revoke just the compromised access token, or every access token tied to that user’s account.

      Make sense? Great!

      Simpler Solutions

      If you’re planning on writing your own API service like the ones discussed in this article, you’ll want to write as little of the actual security code as possible. Actual implementation details can be quite a bit more complex, depending on your framework and programming language.

      Stormpath is an API service that stores your user accounts securely, manages API keys, handles OAuth2 flows, and also provides tons of convenience methods / functions for working with user data, doing social login, and a variety of other things.

      If you have any questions (this stuff can be confusing), feel free to email us directly — we really don’t mind answering questions! You can, of course, also just leave a comment below. Either way works!

      -Randall

      Julian BondCalifornia is in the grip of a record drought tied to climate change. This water crisis holds the potential... [Technorati links]

      March 23, 2015 02:01 PM
      I wonder if anyone appreciates how serious, how close and how inevitable this is. Are the answers really: "extremely", "24 months" and "totally"?

      Bill Smith originally shared this post:
      California is in the grip of a record drought tied to climate change. This water crisis holds the potential to collapse California’s economy if the state truly runs out of water. What an irony that the state most focused on global warming may be its first victim.
      California anchors U.S. economy. It has the seventh largest economy in the world, approximately twice the size of Texas. California’s economy is so large and impacts so many other businesses that its potential collapse due to a water crisis will impact the pocketbooks of most Americans.

      #California #Drought

      


       Climate Change Puts California Economy at Risk of Collapse »
      California faces one more year of water supply -- a water crisis that holds the potential to collapse the state’s economy. What an irony that the state most focused on global warming may be its first catastrophic economic collapse victim.

      [from: Google+ Posts]

      KatasoftThe Ultimate Guide to Mobile API Security [Technorati links]

      March 23, 2015 02:00 PM

      Mobile API consumption is a topic that comes up frequently on both Stack Overflow and the Stormpath support channel. It’s a problem that has already been solved, but requires a lot of prerequisite knowledge and sufficient understanding in order to implement properly.

      This post will walk you through everything you need to know to properly secure a REST API for consumption on mobile devices, whether you’re building a mobile app that needs to access a REST API, or writing a REST API and planning to have developers write mobile apps that work with your API service.

      My goal is to not only explain how to properly secure your REST API for mobile developers, but to also explain how the entire exchange of credentials works from start to finish, how to recover from security breaches, and much more.

      The Problem with Mobile API Security

      Before we dive into how to properly secure your REST API for mobile developers — let’s first discuss what makes mobile authentication different from traditional API authentication in the first place!

      The most basic form of API authentication is typically known as HTTP Basic Authentication.

      The way it works is pretty simple for both the people writing API services and the developers that consume them: the API service issues the developer an API key (an ID and secret), and the developer includes those credentials, base64-encoded, in an HTTP Authorization header on every request.

      HTTP Basic Authentication is great because it’s simple. A developer can request an API key, and easily authenticate to the API service using this key.
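
      For illustration, here is roughly what that looks like from the client side in Python, using the requests library; the URL and key values are placeholders:

          # Minimal sketch of an HTTP Basic Authentication request.
          import requests

          API_KEY_ID = "my-key-id"
          API_KEY_SECRET = "my-key-secret"

          # requests builds the "Authorization: Basic base64(id:secret)" header for us
          resp = requests.get("https://api.example.com/v1/things",
                              auth=(API_KEY_ID, API_KEY_SECRET))
          print(resp.status_code)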

      What makes HTTP Basic Authentication a bad option for mobile apps is that you need to actually store the API key securely in order for things to work. In addition to this, HTTP Basic Authentication requires that your raw API keys be sent over the wire for every request, thereby increasing the chance of exploitation in the long run (the less you use your credentials, the better).

      In most cases, this is impractical as there’s no way to safely embed your API keys into a mobile app that is distributed to many users.

      For instance, if you build a mobile app with your API keys embedded inside of it, a savvy user could reverse engineer your app, exposing this API key, and abusing your service.

      This is why HTTP Basic Authentication is not optimal in untrusted environments, like web browsers and mobile applications.

      NOTE: Like all authentication protocols, HTTP Basic Authentication must be used over SSL at all times.

      Which brings us to our next section…

      Introducing OAuth2 for Mobile API Security

      You’ve probably heard of OAuth before, and the debate about what it is and is not good for. Let’s be clear: OAuth2 is an excellent protocol for securing API services from untrusted devices, and it provides a nice way to authenticate mobile users via what is called token authentication.

      Here’s how OAuth2 token authentication works from a user perspective (OAuth2 calls this the password grant flow):

      1. A user opens up your mobile app and is prompted for their username or email and password.
      2. You send a POST request from your mobile app to your API service with the user’s username or email and password data included (OVER SSL!).
      3. You validate the user credentials, and create an access token for the user that expires after a certain amount of time.
      4. You store this access token on the mobile device, treating it like an API key which lets you access your API service.
      5. Once the access token expires and no longer works, you re-prompt the user for their username or email and password.

      What makes OAuth2 great for securing APIs is that it doesn’t require you to store API keys in an unsafe environment. Instead, it will generate access tokens that can be stored in an untrusted environment temporarily.

      This is great because even if an attacker somehow manages to get a hold of your temporary access token, it will expire! This reduces damage potential (we’ll cover this in more depth in our next article).

      Now, when your API service generates an OAuth2 access token that your mobile app needs, you’ll of course need to store it in your mobile app somewhere.

      BUT WHERE?!

      Well, there are different places this token should be stored depending on what platform you’re developing against. If you’re writing an Android app, for instance, you’ll want to store all access tokens in SharedPreferences (here’s the API docs you need to make it work). If you’re an iOS developer, you will want to store your access tokens in the Keychain.

      If you still have questions, the following two StackOverflow posts will be very useful — they explain not only how you should store access tokens a specific way, but why as well:

      It’s all starting to come together now, right? Great!

      You should now have a high level of understanding in regards to how OAuth2 can help you, why you should use it, and roughly how it works.

      Which brings us to the next section…

      Access Tokens

      Let’s talk about access tokens for a little bit. What the heck are they, anyway? Are they randomly generated numbers? Are they UUIDs? Are they something else? AND WHY?!

      Great questions!

      Here’s the short answer: an access token can technically be anything you want: a random string, a UUID, some other encoded blob.

      As long as you can tie it back to a specific user, validate it on each request, and expire it when you need to...

      You’re golden!

      BUT… With that said, there are some conventions you’ll probably want to follow.

      Instead of handling all this stuff yourself, you can instead create an access token that’s a JWT (JSON Web Token). It’s a relatively new specification that allows you to generate access tokens that can be cryptographically signed, carry embedded JSON data, and expire automatically.

      JWTs also look like a randomly generated string: so you can always store them as strings when using them. This makes them really convenient to use in place of a traditional access token as they’re basically the same thing, except with way more benefits.

      JWTs are almost always cryptographically signed: the server serializes your JSON data, signs it with a secret key, and hands the resulting token to the client. Anyone holding the token can read the data, but nobody can modify it without invalidating the signature.

      Now, from the mobile client, you can view whatever is stored in the JWT. So if I have a JWT, I can easily check to see what JSON data is inside it. Usually it’ll be something like:

      {
        "user_id": "e3457285-b604-4990-b902-960bcadb0693",
        "scope": "can-read can-write"
      }
      

      Now, this is a 100% fictional example, of course, but you get the idea: if I have a copy of this JWT token, I can see the JSON data above, yey!

      But I can also verify that it is still valid, because the JWT spec supports expiring tokens automatically. So when you’re using your JWT library in whatever language you’re writing, you’ll be able to verify that the JWT you have is valid and hasn’t yet expired (cool).

      This means that if you use a JWT to access an API service, you’ll be able to tell whether or not your API call will work by simply validating the JWT! No API call required!
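
      Here is a minimal sketch of that local check, assuming the PyJWT library; the mobile client can read the claims and the exp timestamp, while only the server, which holds the signing secret, can fully verify the signature:

          # Minimal sketch: read the JWT's claims without checking the signature
          # (only the server can do that), then see whether "exp" is still in the future.
          import time
          import jwt  # pip install PyJWT

          def token_looks_usable(token):
              claims = jwt.decode(token, options={"verify_signature": False})
              return claims.get("exp", 0) > time.time()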

      Now, once you’ve got a valid JWT, you can also do cool stuff with it on the server-side.

      Let’s say you’ve given out a JWT to a mobile app that contains the following data:

      {
        "user_id": "e3457285-b604-4990-b902-960bcadb0693",
        "scope": "can-read can-write"
      }
      

      But let’s say some malicious program on the mobile app is able to modify your JWT so that it says:

      {
        "user_id": "e3457285-b604-4990-b902-960bcadb0693",
        "scope": "can-read can-write can-delete"
      }
      

      See how I added in the can-delete permission there? What will happen if this modified token is sent to our API server? Will it work? Will our server accept this modified JWT?

      NOPE!!

      When your API service receives this JWT and validates it, it’ll do a few things: it will recompute the signature over the token’s contents, see that it no longer matches the signature that was originally issued, and reject the request outright.

      This is nice functionality, as it makes handling verification / expiration / security a lot simpler.

      The only thing you need to keep in mind when working with JWTs is this: you should only store stuff you don’t mind exposing publicly.

      As long as you follow the rule above, you really can’t go wrong with using JWTs.

      The two pieces of information you’ll typically store inside of a JWT are the user’s ID and the user’s permissions (scope).

      So, that just about sums up JWTs. Hopefully you now know why you should be using them as your OAuth access tokens — they provide tamper-proof claims, built-in expiration, and data your app can read without an extra API call.

      Now, moving on — let’s talk about how this all works together…

      How it All Works

      In this section we’re going to get into the nitty gritty and cover the entire flow from start to finish, with all the low-level technical details you need to build a secure API service that can be securely consumed from a mobile device.

      Ready? Let’s do this.

      First off, here’s how things will look when we’re done. You’ll notice each image has a little picture next to it. That’s because I’m going to explain each step in detail below.

      OAuth2 Flow

      So, take a look at that image above, and then follow along.

      1. User Opens App

      The user opens the app! Next!

      2. App Asks for Credentials

      Since we’re going to be using the OAuth2 password grant type scheme to authenticate users against our API service, your app needs to ask the user for their username or email and password.

      Almost all mobile apps ask for this nowadays, so users are used to typing their information in.

      3. User Enters their Credentials

      Next, the user enters their credentials into your app. Bam. Done. Next!

      4. App Sends POST Requests to API Service

      This is where the initial OAuth2 flow begins. What you’ll be doing is essentially making a simple HTTP POST request from your mobile app to your API service.

      Here’s a command line POST request example using cURL:

      $ curl --data 'grant_type=password&username=USERNAMEOREMAIL&password=PASSWORD' https://api.example.com/v1/oauth
      

      What we’re doing here is POST’ing the username or email and password to our API service using the OAuth2 password grant type: (there are several grant types, but this is the one we’ll be talking about here as it’s the only relevant one when discussing building your own mobile-accessible API).

      NOTE: See how we’re sending the body of our POST request as form content? That is, application/x-www-form-urlencoded? This is what the OAuth2 spec wants =)

      5. API Server Authenticates the User

      What happens next is that your API service retrieves the incoming username or email and password data and validates the user’s credentials.

      This step is very platform specific, but typically works like so (see the sketch after this list):

      1. You retrieve the user account from your database by username or email.
      2. You compare the password hash from your database to the password received from the incoming API request. NOTE: Hopefully you store your passwords with bcrypt!
      3. If the credentials are valid (the user exists, and the password matches), then you can move onto the next step. If not, you’ll return an error response to the app, letting it know that either the username or email and password are invalid.
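
      Here is a rough sketch of steps 1-3, assuming bcrypt-hashed passwords; find_user_by_login is a hypothetical stand-in for your own database lookup:

          # Minimal sketch of credential validation. find_user_by_login is a
          # hypothetical helper standing in for your own database query.
          import bcrypt  # pip install bcrypt

          def authenticate(login, password):
              user = find_user_by_login(login)        # step 1: load the account
              if user is None:
                  return None
              # step 2: compare the submitted password against the stored bcrypt hash
              # (the hash is assumed to be bytes, e.g. the output of bcrypt.hashpw())
              if not bcrypt.checkpw(password.encode("utf-8"), user["password_hash"]):
                  return None
              return user                              # step 3: credentials are valid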

      6. API Server Generates a JWT that the App Stores

      Now that you’ve authenticated the app’s OAuth2 request, you need to generate an access token for the app. To do this, you’ll use a JWT library to generate a useful access token, then return it to the app.

      Here’s how you’ll do it:

      1. Using whatever JWT library is available for your language, you’ll create a JWT that includes JSON data which holds the user ID (from your database, typically), all user permissions (if you have any), and any other data you need the app to immediately access.

      2. Once you’ve generated a JWT, you’ll return a JSON response to the app that looks something like this:

         {
           "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJEUExSSTVUTEVNMjFTQzNER0xHUjBJOFpYIiwiaXNzIjoiaHR0cHM6Ly9hcGkuc3Rvcm1wYXRoLmNvbS92MS9hcHBsaWNhdGlvbnMvNWpvQVVKdFZONHNkT3dUVVJEc0VDNSIsImlhdCI6MTQwNjY1OTkxMCwiZXhwIjoxNDA2NjYzNTEwLCJzY29wZSI6IiJ9.ypDMDMMCRCtDhWPMMc9l_Q-O-rj5LATalHYa3droYkY",
           "token_type": "bearer",
           "expires_in": 3600
         }
        

        As you can see above, our JSON response contains 3 fields. The first field access_token, is the actual OAuth2 access token that the mobile app will be using from this point forward in order to make authenticated API requests.

        The second field, token_type, simply tells the mobile app what type of access token we’re providing — in this case, we’re providing an OAuth2 Bearer token. I’ll talk about this more later on.

        Lastly, the third field provided is the expires_in field. This is basically the number of seconds for which the supplied access token is valid.

        In the example above, what we’re saying is that we’re giving this mobile app an access token which can be used to access our private API for up to 1 hour — no more. After 1 hour (3600 seconds) this access token will expire, and any future API calls we make using that access token will fail.

      3. On the mobile app side of things, you’ll retrieve this JSON response, parse out the access token that was provided by the API server, and then store it locally in a secure location. On Android, this means SharedPreferences, on iOS, this means Keychain.

      Now that you’ve got an access token securely stored on the mobile device, you can use it for making all subsequent API requests to your API server.

      Not bad, right?
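
      Pulling the server side of step 6 together, here is a condensed sketch assuming the PyJWT library; the secret, lifetime, and user fields are placeholders that mirror the example response above:

          # Minimal sketch: issue a signed JWT and wrap it in the OAuth2 JSON response.
          import time
          import jwt  # pip install PyJWT

          ACCESS_TOKEN_LIFETIME = 3600  # seconds

          def build_token_response(user, secret):
              now = int(time.time())
              access_token = jwt.encode(
                  {
                      "user_id": user["id"],
                      "scope": " ".join(user.get("permissions", [])),
                      "iat": now,
                      "exp": now + ACCESS_TOKEN_LIFETIME,
                  },
                  secret,
                  algorithm="HS256",
              )
              return {
                  "access_token": access_token,
                  "token_type": "bearer",
                  "expires_in": ACCESS_TOKEN_LIFETIME,
              }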

      7. App Makes Authenticated Requests to API Server

      All that’s left to do now is to make secure API requests from your mobile app to your API service. The way you do this is simple.

      In the last step, your mobile app was given an OAuth2 access token, which it then stored locally on the device.

      In order to successfully make API requests using this token, you’ll need to create an HTTP Authorization header that uses this token to identify your user.

      To do this, what you’ll do is insert your access token along with the word Bearer into the HTTP Authorization header. Here’s how this might look using cURL:

      $ curl -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJEUExSSTVUTEVNMjFTQzNER0xHUjBJOFpYIiwiaXNzIjoiaHR0cHM6Ly9hcGkuc3Rvcm1wYXRoLmNvbS92MS9hcHBsaWNhdGlvbnMvNWpvQVVKdFZONHNkT3dUVVJEc0VDNSIsImlhdCI6MTQwNjY1OTkxMCwiZXhwIjoxNDA2NjYzNTEwLCJzY29wZSI6IiJ9.ypDMDMMCRCtDhWPMMc9l_Q-O-rj5LATalHYa3droYkY" https://api.example.com/v1/test
      

      In the end, your Authorization header will look like this: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJEUExSSTVUTEVNMjFTQzNER0xHUjBJOFpYIiwiaXNzIjoiaHR0cHM6Ly9hcGkuc3Rvcm1wYXRoLmNvbS92MS9hcHBsaWNhdGlvbnMvNWpvQVVKdFZONHNkT3dUVVJEc0VDNSIsImlhdCI6MTQwNjY1OTkxMCwiZXhwIjoxNDA2NjYzNTEwLCJzY29wZSI6IiJ9.ypDMDMMCRCtDhWPMMc9l_Q-O-rj5LATalHYa3droYkY.
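
      The same request from Python, for illustration (the URL is the placeholder used throughout this article):

          # Minimal sketch of an authenticated API call using the stored access token.
          import requests

          def call_api(access_token):
              headers = {"Authorization": "Bearer " + access_token}
              return requests.get("https://api.example.com/v1/test", headers=headers)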

      When your API service receives the HTTP request, what it will do is this (a sketch follows the list):

      1. Inspect the HTTP Authorization header value, and see that it starts with the word Bearer.

      2. Next, it’ll grab the following string value, referring to this as the access token.

      3. It’ll then validate this access token (JWT) using a JWT library. This step ensures the token is valid, untampered with, and not yet expired.

      4. It’ll then retrieve the user’s ID and permissions out of the token (permissions are optional, of course).

      5. It’ll then retrieve the user account from the user database.

      6. Lastly, it will ensure that what the user is trying to do is allowed, eg: the user must be allowed to do what they’re trying to do. After this is done, the API server will simply process the API request and return the result normally.
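
      A rough sketch of steps 1-4 above, again assuming the PyJWT library; how you read headers and load users will depend on your framework:

          # Minimal sketch of server-side token validation.
          import jwt  # pip install PyJWT

          def authenticate_request(authorization_header, secret):
              # steps 1-2: the header must look like "Bearer <token>"
              scheme, _, token = authorization_header.partition(" ")
              if scheme != "Bearer" or not token:
                  return None
              try:
                  # step 3: check the signature and expiry in one call
                  claims = jwt.decode(token, secret, algorithms=["HS256"])
              except jwt.InvalidTokenError:
                  return None
              # step 4: pull the user ID and permissions out of the validated token
              return {"user_id": claims.get("user_id"), "scope": claims.get("scope", "")}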

      Notice anything familiar about this flow? You should! It’s almost the exact same way HTTP Basic Authentication works, with one main difference in execution: the HTTP Authorization header is slightly different (Bearer vs Basic).

      This is the end of the “How it All Works” section. In the next article, we’ll talk about all the other things you need to know about managing API authentication on mobile devices.

      Simpler Solutions

      As this is a high level article meant to illustrate how to properly write an API service that can be consumed from mobile devices, I’m not going to get into language specific implementation details here — however, I do want to cover something I consider to be very important.

      If you’re planning on writing your own API service like the ones discussed in this article, you’ll want to write as little of the actual security code as possible. While I’ve done my best to summarize exactly what needs to be done in each step in the process, actual implementation details can be quite a bit more complex.

      It’s usually a good idea to find a popular OAuth2 library for your favorite programming language or framework, and use that to help offload some of the burden of writing this sort of thing yourself.

      Lastly, if you really want to simplify things, you might want to sign up for our service: Stormpath. Stormpath is an API service that stores your user accounts securely, manages API keys, handles OAuth2 flows, and also provides tons of convenience methods / functions for working with user data, doing social login, and a variety of other things.

      Stormpath is also totally, 100% free to use. You can start using it RIGHT NOW in your applications, and BAM, things will just work. We only charge you for real projects — feel free to deploy as many side projects as you’d like on our platform for no cost =)

      Hopefully this article has helped you figure out the best way to handle API authentication for your mobile devices. If you have any questions (this stuff can be confusing), feel free to email us directly!

      -Randall

      Nat SakimuraPresident Obama Taps David Recordon as White House “Director of Information Technology (?)” [Technorati links]

      March 23, 2015 12:53 PM

      Photo by Brian Solis (2009), CC-BY. He presumably looks a bit older by now.

      This is slightly old news. It caused quite a stir among my circle of friends on Friday morning Japan time, but I had no time on Friday, and then I fell ill and was asleep until just a little while ago...

      This is the news, reported by Yahoo! Tech, that David has been appointed the White House's “director of information technology” [1]. I wonder whether the Japanese rendering 「情報技術長官」 is right for this... According to the Wikipedia list of US government terminology [2], “Director” is apparently rendered as 「長官」... (If anyone knows this area well, please tell me...)

      He was a driving force behind the launch of the OpenID® Foundation in the US and its first vice chairman, and he is also the lead author of OpenID® Authentication 2.0. At the time he was working at SixApart and then Verisign Labs; he then went to Facebook, where he got partway through moving Facebook's identity over to OAuth 2.0 [3], and after that moved on to the Open Compute Project [4], where he also left his mark.

      He looks middle-aged in photos, but he is still in his twenties. He is fairly fond of Japan; when he comes here, we go out for meals together. His standard outfit is a T-shirt, shorts, and sandals. I wonder whether he will finally have to wear a suit now that he is joining the White House.

      According to the White House statement [5], his role at the White House will be

      or so the statement says.

      Incidentally, the White House is reportedly requesting $105M (about 12 billion yen) in next year's budget to build digital teams across 25 agencies, so the poaching from Silicon Valley will likely continue.

      Most of them probably take a pay cut, but their salaries jump considerably when they later return to the private sector. So going into government becomes part of building a career. In David's case, too, the Obama administration has only two years left, so he presumably plans to work in government for that time and then return to the private sector.

      In Japan, even if the government were keen, returning to the private sector afterwards seems quite difficult... And it is also unclear whether one could really wield any influence within the system. It is a difficult problem.

       

      [1] Alyssa Bereznak, “Exclusive: Facebook Engineering Director Is Headed to the White House”, (2015-03-19), Yahoo! Tech,  https://www.yahoo.com/tech/exclusive-facebook-engineering-director-is-headed-114060505054.html

      [2] Wikipedia, “米国政府用語一覧” (List of US government terminology), http://ja.wikipedia.org/wiki/%E7%B1%B3%E5%9B%BD%E6%94%BF%E5%BA%9C%E7%94%A8%E8%AA%9E%E4%B8%80%E8%A6%A7

      [3] In the end, it stalled there, which is a pity... As a result, FB is still on something like OAuth 2.0 draft 10...

      [4] Roughly speaking, a project to open up and spread Google- and Facebook-style server designs. http://www.opencompute.org/

      [5] Anita Breckenridge, “President Obama Names David Recordon as Director of White House Information Technology”, The Whitehouse Blog, (2015-03-19),  https://www.whitehouse.gov/blog/2015/03/19/president-obama-names-david-recordon-director-white-house-information-technology

      [6] Mariella Moon, “White House names top Facebook engineer as first director of IT”, Engadget, http://www.engadget.com/2015/03/20/white-house-recordon-facebook-director-it/

      Nat SakimuraOn Strongly Agreeing with Professor Natsui's Commentary on Japan's Personal Data Protection Laws [Technorati links]

      March 23, 2015 11:50 AM

      Professor Natsui's blog [1] carries a commentary on Japan's personal data protection legislation, framed as a review of the so-called “privacy freak book” [2]. I agree with it emphatically.

      The professor writes:

      The obvious, common-sense point that “the Personal Information Protection Act is an administrative regulation” needs to be communicated far more thoroughly.

      Lawyers aside, I suspect this is not understood at all by the general public. That is why we devoted so much attention to this point at the recent OpenID BizDay [3].

      Going back to basics and reasoning about these issues as questions of tort law interpretation will in many cases lead to sounder conclusions
      (…[omitted]…)
      The most useful reference in doing so is, after all, Prosser's typology; such a classical theoretical framework actually turns out to be more useful.

      Another nice aspect is that this maps well onto the technical side: such a typology can be translated directly into “Objectives” and “Threats,” which in turn makes “Controls” easier to design.

      However, applying the Personal Information Protection Act does not by itself lead to direct remedies for the person whose data is involved. It is simply not
      that kind of law. My personal view is that it has so many defects that it should be completely rewritten.

      This may be something that many people have felt but have not been able to say.
      For my part, I have been saying “why not try rewriting it from a clean slate?” in various places recently.

      What Japanese legal scholars really ought to be doing is exhausting every possible effort to see whether these problems can be handled through the interpretation and application of the tort provisions of Japan's Civil Code (especially Article 723).

      I think this is exactly right as well. When you discuss these issues in the United States, that is where the conversation starts. You also hear fairly often that effective privacy protection may actually work better in the US than in the EU. Unlike the US, however, Japan does not seem to have the various mechanisms in this area in place, so putting them in place is another task ahead of us.

      On the other hand, there is also the matter of responding to the WTO loophole problem created by the EU's data protection regime, so apart from substantive privacy protection, a personal data protection law is of course also useful as a diplomatic instrument in trade friction. If that is the goal, then we need to steer in that direction, but as it stands the law is stuck halfway between the two.


      [1] 夏井高人: “鈴木正朝・高木浩光・山本一郎『ニッポンの個人情報 -「個人を特定する情報が個人情報である」と信じているすべての方へ』”, サイバー法ブログ, (2015/3/23), http://cyberlaw.cocolog-nifty.com/blog/2015/03/post-bafd.html


      [2] 鈴木正朝・高木浩光・山本一郎『ニッポンの個人情報 -「個人を特定する情報が個人情報である」と信じているすべての方へ』,  翔泳社 (2015/2/20)


      [3] 崎村夏彦:『セミナー:企業にとっての実践的プライバシー保護~個人情報保護法は免罪符にはならない』, @_Nat Zone, (2015-03-01) http://www.sakimura.org/2015/03/2911/

       

      Christopher Allen - AlacrityMini Resume Card for Conference Season [Technorati links]

      March 23, 2015 06:55 AM

      Between the business of the March/April conference season and leaving Blackphone, I've run out of business cards. Rather than rush to print a bunch of new ones, I've created this mini-resume for digital sharing and a two-sided Avery business card version that I am printing on my laser printer and sharing.

      Not as pretty as my old Life With Alacrity cards, but effective in getting across the diversity of my professional experience and interests.

      Christopher Allen Micro Resume

      As someone who teaches Personal Branding in my courses at BGI@Pinchot.edu, I always find it hard to practice what I preach and ask for advice and suggestions. In this case I'm trying to tame my three-headed Cerberus of a profession: a Privacy/Crypto/Developer Community head, an Innovative Business Educator/Instructional Designer head, and a Collaborative Tools, Processes, Games and Play head. All come tied together in my body as ultimately being about collaboration, but it is hard to explain some of the correspondences.

      March 22, 2015

      Julian BondElectronic music, released on cassette labels, from Novosibirsk. [Technorati links]

      March 22, 2015 07:24 PM
      Electronic music, released on cassette labels, from Novosibirsk.

      http://calvertjournal.com/articles/show/3744/siberian-electronic-music-scene-klammlang-cassettes
       Breaking the ice: the independent cassette label putting Siberian electronica on the map »
      A feature about the Klammklang label and the Siberian electronic music scene

      [from: Google+ Posts]
      March 21, 2015

      Julian BondIf Europe is getting worried about immigrants from N Africa taking the perilous journey to Italy, there's... [Technorati links]

      March 21, 2015 06:30 PM
If Europe is getting worried about immigrants from N Africa taking the perilous journey to Italy, there's an obvious solution. Make the countries of the southern and eastern Mediterranean part of the EU.
      https://deepresource.wordpress.com/2015/03/21/europe-defends-itself/

      Note: I hate contentious (and non-contentious) blogs that don't allow comments. But then I don't allow comments on my own blog because I can't be bothered to moderate them.
       Europe Defends Itself »
      German NWO-magazine and US State Department mouth piece der Spiegel hates to say it, but Europe seems to be getting serious about defending itself against the waves of invaders from the South. In t...

      [from: Google+ Posts]

      Kantara InitiativeNon-Profits on the Loose @ RSA 2015 [Technorati links]

      March 21, 2015 01:23 AM

      Tuesday April 21st from 5-8pm @ the Minna Gallery
      Join Kantara and partners at the 2015 “Non-Profits On the Loose.”

[Image: Non-Profits On the Loose 2015 invitation]

      How to get in to the social:

      Enter using your RSA badge or bring the invite below for exclusive access to this annual networking event. Break bread with leading movers and shakers from the Identity Management and Cyber Security industries. We look forward to seeing you there! PS – If you tweet about the event please use @kantaranews #NPOTL

      Try the “UMArtini”:

Celebrate the achievements of Kantara’s award-winning UMA (User-Managed Access) Work Group in shaping identity for a connected world.

      “After a hard day of personal data sharing, you’ll welcome the UMArtini, a classic vodka (or gin?) martini with a token splash of olive brine and — with your consent — garnished with an olive.”

      Thanks to our Sponsors:

      Kantara extends gratitude to our generous sponsors: Experian, ForgeRock, and the IEEE-SA. Their support enables this event to provide community heavy-hitter networking and fun.

       NPOTL 2015 Sponsors!


       

      March 20, 2015

      Vittorio Bertocci - MicrosoftAzure AD Token Lifetime [Technorati links]

      March 20, 2015 04:10 PM

      For how long are AAD-issued tokens valid? I have mentioned this in scattered posts, but this AM Danny reminded me of how frequent this Q really is – and as such, it deserves its own entry.

      As of today, the rules are pretty simple:

That’s it, short and sweet.

      GluuOSCON: Crypto For Kids [Technorati links]

      March 20, 2015 12:02 AM


       

      Description

Crypto is the ultimate secret message machine! This workshop will first introduce the history of crypto and some of its basic mathematical underpinnings. Then, through fun activities and games, kids will get some hands-on experience using Linux crypto tools and the Python programming language.

      Abstract

Without crypto, we could not have security or privacy on the Internet. You would not be able to pay for anything on the Web. In fact, crypto is more important than the Web, because every Internet service (email, video, voice communication) needs it whenever private communication is required.

But how has crypto technology changed since World War II, when teams of English and American mathematicians broke Hitler’s Enigma cipher? Where did modern crypto come from? Who invented it?

In the two hours of this class, we’ll use some new open source cryptography tools to help kids understand what makes the technology tick. We’ll highlight two-way encryption, public-private key encryption, crypto signing, X.509 certificates, hierarchical public key infrastructures, and physical access tokens. Along the way, we’ll also introduce some basic Python scripts that will enable the kids to send each other secret messages.
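For a taste of the sort of exercise involved, here is a minimal sketch using GnuPG, one of the standard Linux crypto tools (the message text and file name are made up for illustration):

# encrypt a secret message with a shared passphrase (symmetric encryption)
echo "meet at the treehouse at four" | gpg --symmetric --armor --output secret.asc
# the recipient decrypts it with the same passphrase
gpg --decrypt secret.asc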

      March 19, 2015

      Bill Nelson - Easy IdentityOpenDJ Access Control Explained [Technorati links]

      March 19, 2015 11:04 PM

An OpenDJ implementation will contain certain data that you would like to explicitly grant or deny access to.  Personally identifiable information (PII) such as a user’s home telephone number, their address, birth date, or simply their email address might be required by certain team members or applications, but it might be a good idea to keep this type of information private from others. On the other hand, you may want their office phone number published for everyone within the company to see but limit access to this data outside of the company.

Controlling users’ access to different types of information forms the basis of access control in OpenDJ and consists of two stages: authentication and authorization.

      Before you are allowed to perform any action within OpenDJ, it must first know who you are.  Once your identity has been established, OpenDJ can then ascertain the rights you have to perform actions either on the data contained in its database(s) or within the OpenDJ process, itself.
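For example, in the ldapsearch command below (host, port, and credentials are placeholders), the bind DN and password establish who you are before OpenDJ decides whether the search itself is allowed:

./ldapsearch -h hostname -p portnumber -D "uid=bnelson,ou=people,dc=example,dc=com" -w "password" -b "dc=example,dc=com" -s sub "(uid=scarter)" mail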

       


      Access Control = Authentication + Authorization

       

Note:  Access control is not defined in any of the LDAP RFCs, so the manner in which directory servers implement access control varies from vendor to vendor.  Many directory servers (including OpenDJ) follow the ACI syntax introduced by Netscape for its LDAP v3 servers.

       

      Access control is implemented with an operational attribute called aci (which stands for access control instruction).  Access control instructions can be configured globally (the entire OpenDJ instance) or added to specific directory entries.

       

      1.      Global ACIs:

       

      Global ACIs are not associated with directory entries and therefore are not available when searching against a typical OpenDJ suffix (such as dc=example,dc=com).  Instead, Global ACIs are considered configuration objects and may be found in the configuration suffix (cn=config).  You can find the currently configured Global ACIs by opening the config.ldif file and locating the entry for the “Access Control Handler”.  Or, you can search for “cn=Access Control Handler” in the configuration suffix (cn=config) as follows:

./ldapsearch -h hostname -p portnumber -D "cn=directory manager" -w "password" -b "cn=config" -s sub "cn=Access Control Handler" ds-cfg-global-aci

       

      This returns the following results on a freshly installed (unchanged) OpenDJ server.

       

      dn: cn=Access Control Handler,cn=config

      ds-cfg-global-aci: (extop=”1.3.6.1.4.1.26027.1.6.1 || 1.3.6.1.4.1.26027.1.6.3 || 1.3.6.1.4.1.4203.1.11.1 || 1.3.6.1.4.1.1466.20037 || 1.3.6.1.4.1.4203.1.11.3″) (version 3.0; acl “Anonymous extended operation access”; allow(read) userdn=”ldap:///anyone”;)

      ds-cfg-global-aci: (target=”ldap:///”)(targetscope=”base”)(targetattr=”objectClass||namingContexts||supportedAuthPasswordSchemes||supportedControl||supportedExtension||supportedFeatures||supportedLDAPVersion||supportedSASLMechanisms||supportedTLSCiphers||supportedTLSProtocols||vendorName||vendorVersion”)(version 3.0; acl “User-Visible Root DSE Operational Attributes”; allow (read,search,compare) userdn=”ldap:///anyone”;)

ds-cfg-global-aci: (target=”ldap:///cn=schema”)(targetattr=”attributeTypes||objectClasses”)(version 3.0;acl “Modify schema”; allow (write)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”);)

ds-cfg-global-aci: (target=”ldap:///cn=schema”)(targetscope=”base”)(targetattr=”objectClass||attributeTypes||dITContentRules||dITStructureRules||ldapSyntaxes||matchingRules||matchingRuleUse||nameForms||objectClasses”)(version 3.0; acl “User-Visible Schema Operational Attributes”; allow (read,search,compare) userdn=”ldap:///anyone”;)

      ds-cfg-global-aci: (target=”ldap:///dc=replicationchanges”)(targetattr=”*”)(version 3.0; acl “Replication backend access”; deny (all) userdn=”ldap:///anyone”;)

      ds-cfg-global-aci: (targetattr!=”userPassword||authPassword||changes||changeNumber||changeType||changeTime||targetDN||newRDN||newSuperior||deleteOldRDN”)(version 3.0; acl “Anonymous read access”; allow (read,search,compare) userdn=”ldap:///anyone”;)

      ds-cfg-global-aci: (targetattr=”audio||authPassword||description||displayName||givenName||homePhone||homePostalAddress||initials||jpegPhoto||labeledURI||mobile||pager||postalAddress||postalCode||preferredLanguage||telephoneNumber||userPassword”)(version 3.0; acl “Self entry modification”; allow (write) userdn=”ldap:///self”;)

      ds-cfg-global-aci: (targetattr=”createTimestamp||creatorsName||modifiersName||modifyTimestamp||entryDN||entryUUID||subschemaSubentry||etag||governingStructureRule||structuralObjectClass||hasSubordinates||numSubordinates”)(version 3.0; acl “User-Visible Operational Attributes”; allow (read,search,compare) userdn=”ldap:///anyone”;)

      ds-cfg-global-aci: (targetattr=”userPassword||authPassword”)(version 3.0; acl “Self entry read”; allow (read,search,compare) userdn=”ldap:///self”;)

      ds-cfg-global-aci: (targetcontrol=”1.3.6.1.1.12 || 1.3.6.1.1.13.1 || 1.3.6.1.1.13.2 || 1.2.840.113556.1.4.319 || 1.2.826.0.1.3344810.2.3 || 2.16.840.1.113730.3.4.18 || 2.16.840.1.113730.3.4.9 || 1.2.840.113556.1.4.473 || 1.3.6.1.4.1.42.2.27.9.5.9″) (version 3.0; acl “Authenticated users control access”; allow(read) userdn=”ldap:///all”;)

      ds-cfg-global-aci: (targetcontrol=”2.16.840.1.113730.3.4.2 || 2.16.840.1.113730.3.4.17 || 2.16.840.1.113730.3.4.19 || 1.3.6.1.4.1.4203.1.10.2 || 1.3.6.1.4.1.42.2.27.8.5.1 || 2.16.840.1.113730.3.4.16 || 1.2.840.113556.1.4.1413″) (version 3.0; acl “Anonymous control access”; allow(read) userdn=”ldap:///anyone”;)

       

      2.      Entry-Based ACIs:

       

      Access control instructions may also be applied to any entry in the directory server.  This allows fine grained access control to be applied anywhere in the directory information tree and therefore affects the scope of the ACI.

      Note:  Placement has a direct effect on the entry where the ACI is applied as well as any children of that entry.

      You can obtain a list of all ACIs configured in your server (sans the Global ACIs) by performing the following search:

       

./ldapsearch -h hostname -p portnumber -D "cn=directory manager" -w "password" -b "dc=example,dc=com" -s sub "(aci=*)" aci

       

By default, there are no ACIs configured at the entry level.  The following is an example of the ACIs that might be returned if you did have some configured.

       

      dn: dc=example,dc=com

      aci: (targetattr=”*”)(version 3.0;acl “Allow entry search”; allow (search,read)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”);)

      aci: (targetattr=”*”)(version 3.0;acl “Modify config entry”; allow (write)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”);)

      aci: (targetcontrol=”2.16.840.1.113730.3.4.3″)(version 3.0;acl “Allow persistent search”; allow (search, read)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”);)

      aci: (version 3.0;acl “Add config entry”; allow (add)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”);)

      aci: (version 3.0;acl “Delete config entry”; allow (delete)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”); )

      dn: ou=Applications,dc=example,dc=com

      aci: (target =”ldap:///ou=Applications,dc=example,dc=com”)(targetattr=”*”)(version 3.0;acl “Allow Application Config Access to Web UI Admin”; allow (all)(userdn = “ldap:///uid=webui,ou=Applications,dc=example,dc=com”); )

      ACI Syntax:

       

The syntax for access control instructions is not specific to OpenDJ; in fact, for the most part it shares the same syntax with Oracle Directory Server Enterprise Edition (“ODSEE”).  This is mainly due to their common lineage with Sun Microsystems; other directory servers do not use the same syntax, which makes migration more difficult (even though the schema in both servers contains an attribute called aci).  If you export OpenDJ directory entries to LDIF and attempt to import them into another vendor’s server, the aci statements would either be ignored or, worse, might have unpredictable results altogether.

      The following syntax is used by the OpenDJ server.

       

[Figure: ACI syntax]

       

Access control instructions require three inputs: a target, permissions, and a subject. The target specifies the entries to which the aci applies. The subject identifies the client performing the operation, and the permissions specify what the subject is allowed to do. You can build some very powerful access control from these three inputs.

The syntax also includes a version string, version 3.0; this is the version of the aci syntax, not the LDAP version. Finally, the syntax allows you to enter a human-readable name, which makes it easy to search for and identify access control statements in the directory server.
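Putting those pieces together, the general shape of an ACI looks roughly like this (a sketch of the Netscape-style syntax rather than a complete grammar), followed by a simple example consistent with the default Global ACIs shown earlier:

(target rules)(version 3.0; acl "human-readable name"; permission (rights) bind rules;)

(targetattr="mail")(version 3.0; acl "Anyone can read mail"; allow (read,search,compare) userdn="ldap:///anyone";)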

Note:  Refer to the OpenDJ Administration Guide for a more detailed description of the aci components.

      The following is an example of an ACI that permits a user to write to their own password and mobile phone attributes.

       

[Figure: sample ACI allowing a user to write to their own userPassword and mobile attributes]

       

You cannot read an ACI strictly from left to right, or even right to left; you simply have to dive in and look for the information needed to understand its intent.  If you have been working with ACIs for some time, you probably already have your own process, but I read/interpret the preceding ACI as follows:

This ACI allows a user ("ldap:///self") to write to their own userPassword and mobile attributes (targetattr="userPassword||mobile").

If you place this ACI on a particular user's object (e.g. uid=bnelson,ou=people,dc=example,dc=com), then it applies only to that object.  If you place it on a container of user objects (e.g. ou=people,dc=example,dc=com), then it applies to all user objects in that container.
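As a concrete illustration, the following sketch adds such an ACI to the ou=people container with ldapmodify (the acl name and connection parameters are placeholders):

./ldapmodify -h hostname -p portnumber -D "cn=directory manager" -w "password"
dn: ou=people,dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr="userPassword||mobile")(version 3.0; acl "Self write to password and mobile"; allow (write) userdn="ldap:///self";)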

       

      Access Control Processing:

       

      Access control instructions provide fine-grained control over what a given user or group member is authorized to do within the directory server.

When a directory-enabled client tries to perform an operation on any entry in the server, an access control list (ACL) is created for that particular entry. The ACL for any given entry is built from the ACIs on the entry being accessed as well as on any parent entries all the way up to the root entry.

       

[Figure: example directory information tree showing where ACIs are applied]

       

      The ACL is essentially the summation of all acis defined for the target(s) being accessed plus the acis for all parent entries all the way to the top of the tree.  Included in this list are any Global ACIs that may have been configured in the cn=config as well.  While not entirely mathematically accurate, the following formula provides an insight into how the ACL is generated.

       

[Figure: formula showing how an entry's ACL is assembled]
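In plain text, the idea described above is roughly:

ACL(entry) = Global ACIs (cn=config) + ACIs(root suffix) + ... + ACIs(parent entry) + ACIs(entry)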

       

      Using the previous formula, the access control lists for each entry in the directory information tree would be as follows:

       

       

      Once the ACL is created, the list is then processed to determine if the client is allowed to perform the operation or not.  ACLs are processed as follows:

1. If there is at least one explicit DENY rule that prevents the user from performing the requested action (e.g. deny(write)), then the user is denied.
2. If there is at least one explicit ALLOW rule that allows the user to perform the requested action (e.g. allow(write)), then the user is allowed (as long as no DENY rule prevents it).
3. If there are neither DENY nor ALLOW rules defined for the requested action, then the user is denied. This is referred to as the implicit deny.

      Something to Think About…

Thought 1:  If, in the absence of any access control instructions, the default is to deny access, then what is the purpose of access control instructions, you might ask?  ACIs with ALLOW rules are used to grant a user permission to perform some action.  Without ALLOW ACIs, all actions are denied (due to the implicit deny rule).

Thought 2:  If the default is to implicitly deny a user, then what is the purpose of DENY rules?  DENY rules are used to revoke a previously granted permission.  For instance, suppose that you create an ALLOW rule for the Help Desk Admin group to access users' PII data in order to verify a user's identity for a password reset, but you have a recently hired Help Desk Admin who has not yet completed the required sensitivity training.  You may elect to keep him in the Help Desk Admin group for other reasons, but revoke his ability to read users' PII data until his training has been completed, as sketched below.

Note:  You should use DENY rules sparingly.  If you are creating too many DENY rules, you should question how you have created your ALLOW rules.
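A sketch of that scenario as a pair of ACIs (the group DN, user DN, and attribute list are illustrative):

aci: (targetattr="homePhone||homePostalAddress||mail")(version 3.0; acl "Help Desk PII read"; allow (read,search,compare) groupdn="ldap:///cn=Help Desk Admins,ou=groups,dc=example,dc=com";)
aci: (targetattr="homePhone||homePostalAddress||mail")(version 3.0; acl "Revoke PII read until training complete"; deny (read,search,compare) userdn="ldap:///uid=newhire,ou=administrators,dc=example,dc=com";)

Because explicit DENY rules are evaluated first, the new hire is blocked even though the group-level ALLOW continues to apply to everyone else in the group.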

      Thought 3:  If the absence of access control instructions means that everyone is denied, then how can we manage OpenDJ in the event that conflicting ACIs are introduced?  Or worse, ACIs are dropped altogether?  That is where the OpenDJ Super User and OpenDJ privileges come in.

       

      OpenDJ’s Super User:

       

The RootDN user (“cn=Directory Manager” by default) is a special administrative account that is permitted full access to directory server data and can perform almost any action in the directory service itself.  Essentially, this account is similar to the root or Administrator accounts on UNIX and Windows systems, respectively.

If you look in the directory server you will find that there are no access control instructions granting the RootDN this unrestricted access; there are, however, privileges that do so.

       

      Privileges:

       

While access control instructions restrict access to directory data through LDAP operations, privileges define administrative tasks that may be performed by users within OpenDJ. Assigning privileges to users (either directly or through groups) effectively allows those users to perform the administrative tasks defined by those privileges.

      The following table provides a list of common privileges and their relationship to the RootDN user.

       

[Table: common privileges and their default assignment to the RootDN user]

       

The RootDN user is assigned these privileges by default and, similar to Global ACIs, they are defined and maintained in the OpenDJ configuration.  The following is the default list of privileges associated with Root DN users (of which the Directory Manager account is one).

       

      dn: cn=Root DNs,cn=config

      objectClass: ds-cfg-root-dn

      objectClass: top

      ds-cfg-default-root-privilege-name: bypass-lockdown

      ds-cfg-default-root-privilege-name: bypass-acl

      ds-cfg-default-root-privilege-name: modify-acl

      ds-cfg-default-root-privilege-name: config-read

      ds-cfg-default-root-privilege-name: config-write

      ds-cfg-default-root-privilege-name: ldif-import

      ds-cfg-default-root-privilege-name: ldif-export

      ds-cfg-default-root-privilege-name: backend-backup

      ds-cfg-default-root-privilege-name: backend-restore

      ds-cfg-default-root-privilege-name: server-lockdown

      ds-cfg-default-root-privilege-name: server-shutdown

      ds-cfg-default-root-privilege-name: server-restart

      ds-cfg-default-root-privilege-name: disconnect-client

      ds-cfg-default-root-privilege-name: cancel-request

      ds-cfg-default-root-privilege-name: password-reset

      ds-cfg-default-root-privilege-name: update-schema

      ds-cfg-default-root-privilege-name: privilege-change

      ds-cfg-default-root-privilege-name: unindexed-search

      ds-cfg-default-root-privilege-name: subentry-write

      cn: Root DNs

       

This list can be retrieved using the OpenDJ dsconfig command:

       

./dsconfig -h localhost -p 4444 -D "cn=directory manager" -w password get-root-dn-prop

       

      with the ldapsearch command:

       

./ldapsearch -h hostname -p portnumber -D "cn=directory manager" -w "password" -b "cn=config" -s sub "cn=Root DNs" ds-cfg-default-root-privilege-name

       

      or simply by opening the config.ldif file and locating the entry for the “cn=Root DNs” entry.

Most operations involving sensitive or administrative data require that a user have both the appropriate privilege(s) and the appropriate access control instructions.  This allows you to configure authorization at a fine-grained level, such as managing access control or resetting passwords.

      Privileges are assigned to users and apply globally to the directory service.  Any user can be granted or denied any privilege and by default only the RootDN users are assigned a default set of privileges.
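For example, a minimal sketch of granting the password-reset privilege to the help desk administrator account used later in this post; privileges are added to a user entry with the ds-privilege-name operational attribute (the DN and connection parameters are placeholders):

./ldapmodify -h hostname -p portnumber -D "cn=directory manager" -w "password"
dn: uid=helpdeskadmin,ou=administrators,dc=example,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: password-reset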

      Note:  Consider creating different types of administrative groups in OpenDJ and assign the privileges and ACIs to those groups to define what a group member is allowed to do.  Adding users to that group then automatically grants those users the rights defined in the group and conversely, removing them from the group drops those privileges (unless they are granted through another group).
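A sketch of one such administrative group as a standard static group entry, which ACIs can then reference with a groupdn bind rule (all names are illustrative):

dn: cn=Help Desk Admins,ou=groups,dc=example,dc=com
objectClass: top
objectClass: groupOfUniqueNames
cn: Help Desk Admins
uniqueMember: uid=helpdeskadmin,ou=administrators,dc=example,dc=com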

       

      Effective Rights:

       

      Once you set up a number of ACIs, you may find it difficult to understand how the resulting access control list is processed and ultimately the rights that a particular user may have.  Fortunately OpenDJ provides a method of evaluating the effective rights that a subject has on a given target.

      You can use the ldapsearch command to determine the effective rights that a user has on one or more attributes on one or more entries.

$ ldapsearch -h localhost -p 1389 -D "cn=Directory Manager" -w password -g "dn:uid=helpdeskadmin,ou=administrators,dc=example,dc=com" -b "uid=scarter,ou=people,dc=example,dc=com" -s base "(objectclass=*)" "*" aclrights

The preceding search is performed by the Root DN user (“cn=Directory Manager”).  It passes the -g option to request the get effective rights control (to which the Directory Manager has the appropriate access configured). The goal is to determine what rights the Help Desk Administrator (uid=helpdeskadmin,…) has on Sam Carter’s entry (uid=scarter,…).  The scope of the search is limited to Sam Carter’s entry using the base parameter.  Finally, the search returns not only the attributes, but the effective rights (aclrights) as well.

      Possible results from a search operation such as this are as follows:

       

      dn: uid=scarter,ou=People,dc=example,dc=com

      objectClass: person

      objectClass: top

      uid: scarter

      userPassword: {SSHA}iMgzz9mFA6qYtkhS0Z7bhQRnv2Ic8efqpctKDQ==

      givenName: Sam

      cn: Sam Carter

      sn: Carter

      mail: sam.carter@example.com

aclRights;attributeLevel;objectclass: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;uid: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;userpassword: search:0,read:0,compare:0,write:1,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;givenname: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;cn: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;sn: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;mail: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;entryLevel: add:0,delete:0,read:1,write:0,proxy:0

      The search results contain not only the attributes/attribute values associated with Sam Carter’s object, but the effective rights that the Help Desk Admins have on those attributes.  For instance,

aclRights;attributeLevel;givenname: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0

       

The aclRights;attributeLevel;givenname notation indicates that this line contains the effective rights for the givenname attribute.  The individual permissions listed show the rights that the Help Desk Administrator has on this attribute of Sam Carter’s entry (1 = allowed, 0 = denied).

       

      Recommendations:

       

An OpenDJ installation includes a set of default (Global) access control instructions which, by some standards, may be considered insecure.  For instance, there are five ACIs that allow anonymous users to read certain controls, extended operations, operational attributes, schema attributes, and user attributes.  The basic premise behind this is that ForgeRock wanted to provide an easy out-of-the-box evaluation of the product while at the same time providing a path forward for securing it.  OpenDJ is intended to be hardened to meet a company’s security policies, and in fact one task that is typically performed before placing OpenDJ in production is to limit anonymous access.  There are two ways to do this:

1. Enable the reject-unauthenticated-requests property using the dsconfig command.
2. Update the Global ACIs.

Mark Craig provides a nice blog post on how to turn off anonymous access using the dsconfig command.  The other option is to simply change the reference in the Global ACIs from ldap:///anyone to ldap:///all, which prevents anonymous users from gaining access to this information.
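For the first option, a minimal sketch of the dsconfig call follows (check the exact subcommand and property names against your OpenDJ version):

./dsconfig -h hostname -p 4444 -D "cn=directory manager" -w password set-global-configuration-prop --set reject-unauthenticated-requests:true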

      Note:  Use of ldap:///anyone in an ACI includes both authenticated and anonymous users – essentially, anyone.  Changing this to ldap:///all restricts the subject to all authenticated users.

      The following comments from Ludo Poitou (ForgeRock’s OpenDJ Product Manager) should be considered before simply removing anonymous access.

      You don’t want to remove the ACI rules for Anonymous access, you want to change it from granting access to anyone (ldap:///anyone) to granting access to all authenticated users (ldap:///all).

This said, there are some differences between fully rejecting unauthenticated requests and using ACI to control access. The former will block all access, including attempts to discover the server’s capabilities by reading the RootDSE. The latter allows you to control which parts can be accessed anonymously, and which shouldn’t.

      There’s been a lot of fuss around allowing anonymous access to a directory service. Some people are saying that features and naming context discovery is a threat to security, allowing malicious users to understand what the server contains and what security mechanisms are available and therefore not available. At the same time, it is important for generic purpose applications to understand how they can or must use the directory service before they actually authenticate to it.

      Fortunately, OpenDJ has mechanisms that allow administrators to configure the directory services according to their security constraints, using either a simple flag to reject all unauthenticated requests, or by using ACIs.

      A few other things to consider when configuring access control in OpenDJ include the following:

      1. Limit the number of Root DN user accounts

You should have one Root DN account, and it should not be shared among multiple administrators.  Sharing it makes it nearly impossible to determine the identity of the person who performed a configuration change or operation in OpenDJ.  Instead, make the password complex and store it in a password vault.

2. Create a delegated administration environment

      Now that you have limited the number of Root DN accounts, you need to create groups to allow users administrative rights in OpenDJ.  Users would then log in as themselves and perform operations against the directory server using their own account.  The tasks associated with this are as follows:

3. Associate privileges and ACIs with users for fine-grained access control

Now that you have created administrative groups, you will ultimately need to provide certain users with more rights than others.  You can create additional administrative groups, but what if you only need one user to have these rights?  Creating a group of one may or may not be advisable and may actually lead to group explosion (where you end up with more groups than you actually have users).  Instead, consider assigning privileges to that particular user and then creating ACIs based on that user.


      GluuOSCON 2015 Access Management Workshop [Technorati links]

      March 19, 2015 05:39 PM


      Description

Centralizing authentication and access management can enable your domain to adapt more quickly to changing security requirements. This workshop will provide an overview of the Gluu Server, including the architecture, installation process, and configuration. The workshop will show how to centrally control access to APIs for a web or native client using the OpenID Connect and UMA profiles of OAuth2.

      Abstract

There are significant security advantages to a domain offering centralized authentication and authorization APIs. If developers hard-code security, updates to policies require rebuilding applications and sometimes regression testing. This can make it hard for an organization to change policies quickly in response to threats.

Since the late 1990s, enterprise Identity and Access Management (“IAM”) suites have been available from large vendors like Oracle, CA, IBM and RSA. While individual FOSS components exist, assembling an equivalent IAM stack has been difficult. In 2009, the Gluu Server set out to change this.

This workshop will guide the attendee through deploying a Gluu Server, using either the CentOS or Ubuntu packages. It will also include a tutorial on how to configure applications to leverage the central authentication and authorization infrastructure. No programming is required for the basics—most of the examples involve only Linux system administration—but some advanced use cases will be presented to demonstrate how a programmer could call the APIs from a sample Python application, as sketched below. Also included will be an overview of authentication APIs, including SAML, OAuth2 and LDAP.
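For a flavor of those advanced use cases, the call such an application makes is a standard OAuth2 token request; here is a minimal sketch using curl against a hypothetical token endpoint (the URL, client_id, and client_secret are placeholders, not actual Gluu Server paths):

# exchange client credentials for an access token (RFC 6749 client credentials grant)
curl -s https://idp.example.com/token -d grant_type=client_credentials -d client_id=my-client -d client_secret=my-secret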

The goal is to demystify the art of identity and access management and provide attendees with the tools they need to improve application security at their organization.