August 29, 2015

Mike Jones - Microsoft: Proof-of-Possession Key Semantics for JWTs spec addressing remaining comments

August 29, 2015 12:54 AM

Proof-of-Possession Key Semantics for JWTs draft -04 addresses the remaining working group comments received – both a few leftover WGLC comments and comments received during IETF 93 in Prague. The changes were:

The updated specification is available at:

An HTML formatted version is also available at:

August 27, 2015

Courion: Lasting Effects and How to Avoid a Data Breach

August 27, 2015 12:47 PM


Target, Ashley Madison, and the IRS all made news this week for being hacked and having information stolen. The difference between them? Two years. The lesson? A hack to your system may happen over a few seconds or a few months, but the effects can linger for years on your brand reputation and your bottom line. Today I want to talk about the real and lasting effects of a data breach and what it could mean for your organization.

Brand Reputation: Confession. I am a fan of Target. As in, I will drive out of my way to go there over another store that might be closer to my house, and I'm OK with knowing that I might pay more there because I believe in their quality and customer service. However, even I was worried when Target announced the massive hack in 2013. Was I shopping at Target on that date? Please re-read the first line and make your own assumptions there. But was I hacked? And how would I know? Would Target tell me that my information was stolen and out in the open for anyone to see, use, and exploit? I was worried and, I'll admit, it took me a while to go back.

Did I go back? Of course I did, and so did millions of other customers. However, their brand reputation suffered in the short term, with even avid fans like myself backing away, and it continues to suffer in the long term. We all saw that Target's brand reputation dropped dramatically after the hack. However, what you may not have seen is that every time a major hack happens, Target is almost always mentioned, reminding customers what happened. If you're imagining more dips than peaks, you're right. Just this week, Target settled a $67 million claim with Visa. Another reminder means another dip in the graph. Source: Huffington Post

Bottom line: Recently the FBI apprehended a group of hackers that were using press releases to get insider trading information. When banks are hacked, they often watch the money they control go into another account, another bank, another country where they can't get it back. But what about when the hackers aren't targeting your money? What about when they go for your seemingly innocuous information?

The possibilities are still endless and are just as damaging to your bottom line. We mentioned that the decline in brand reputation causes decreased sales/business for the organization but what about the other costs to your organization? Such as:

Cost of Settlement:

As mentioned earlier, Target just settled with Visa for a cool $67 Million resulting from the 2013 hack. That was two years ago and they are still paying for the breach. Oh and they still haven’t settled with MasterCard. More to come on that I’m sure.

Cost of Fines:

Are you a hospital? Then you have even more rules and regulations to worry about. If you are found non-compliant with HIPAA, then you are at risk for a fine. Recently a Massachusetts hospital was fined $218,000 for being non-compliant. Probably not something they planned in the yearly budget.

Cost of Monitoring/Customer Support:

Home Depot, another major retailer, another massive hack. However, when Home Depot announced to its consumers that they could be in danger, they offered to pay for one year of credit monitoring to make sure they were protected. While this did a great deal of damage control, it cost them dearly. 

Looking for ways to mitigate these effects? Our infographic below includes suggestions from our own security executives. If you want to know more, contact us at info@courion.com or leave a comment below.

 

Tips for Securing Your Network


August 26, 2015

Courion: How to Secure Your Medical Devices from Cyber Attacks

August 26, 2015 12:24 PM


The past week has been bad news for drug pumps. The FDA issued its first warning about them, and a video has been making its way around the blogosphere showing a drug pump hack. While these issues have been spotlighted this week, they are not the only devices at risk. In the past year we have seen Electronic Health Record (EHR) systems flourish, along with the ease of housing them on mobile devices, which we all know have a not-so-solid record when it comes to being breached.

Mobile Medical

So the question is, why are we still using these devices if we know they are so vulnerable? Simply put, the same reason we allow smart thermostats and refrigerators in our home - convenience.

Drug pumps are easily accessed by nurses and doctors who can give doses to patients from the nurse’s station rather than having to walk to their room. Medical records can be pulled up on a tablet in radiology and billing at the same time without having to manually walk them from one place to the other. These are all highly convenient and keep the costs, not to mention the time spent by each employee, to a minimum.

Medical devices are convenient and are improving the way we do business and the way we treat our patients. They aren't going anywhere, so rather than look to replace them, we need to learn how to secure them.

Differentiated Networks: Just like you keep your valuables out of reach of your three-year-old, you have to keep your devices out of reach of the public. This week in his blog, Dr. John Halamka expounded on this topic, and it was so simple and so logical it’s no wonder it often gets overlooked. He suggested setting up three different networks:

                - Public:  This Wi-Fi network would be accessible by patients and families and would be open and free. While you would put up firewalls and ensure some measure of security, you would not need to monitor this system as you wouldn't be sharing any data over it.

                - Private: This network would be for employees only. While it would be more secure, it would still be an open network, accessible to anyone with a password, and would need to be monitored and governed. Only approved and secure messaging should be used on any device when sharing medical information, even if it is directly with a patient.

                 In the most recent Spok survey on BYOD devices, it is noted that—on average—48% of mobile devices used in hospitals are personal and not issued by the organization. With such a high percentage, your BYOD and security policies should be even stricter to keep the risk of network penetration at a minimum.

               - Device-Only:  This network would not be hooked to any other systems or personal devices and would have no access to the outside internet. The only access to this network would be through a key provided by the security team or through an authorized device.


Firewalls:
Build a gate and dig a moat. You need to make sure that you have a firewall in place to catch anything that is coming in or going out on any of your networks. While no one has ever laid down their weapons when approaching a gate, they do have to try a lot harder, and you want to put every barrier possible in their way.

Provisioning: You're a hospital administrator with 400 nurses, 200 doctors, and another 500 people making up your maintenance, billing, support, and other staff. Quick: what access does Bob Smith, RN need?  Ok that was a hard one, because we don't know what area he works in. What about Sally in HR? Do you know what access she has? What she actually needs?

Hospitals are huge organizations with thousands of employees, both full-time and contract, and just like each patient needs a different diagnosis, each employee needs different access to get the job done. With a proper provisioning tool you can automate access for specific roles, properly approve access requests, and ensure that only the right people have the right access and that you aren't rubber-stamping access for people who may not need what they ask for.

Culture of Security:  We all know the number one reason for security breaches: user error. And the number one reason for user error is lack of awareness. This might be one of the cheapest fixes you could ever have. All you need is education. Build a training program that goes into new employee onboarding to discuss the importance of security in your culture. Reinforce this with articles in your monthly newsletter or tips on how to protect yourself and your information. Improve your password policies and make sure that everyone is changing passwords on a frequent basis so that the chance of being hacked is reduced. Lastly, build an incident response plan. Make sure that everyone knows what to do, or at least knows where to find the plan, when something goes wrong.

Benjamin Franklin said that an ounce of prevention is worth a pound of cure. It's time to create a wellness plan to take care of our security systems just like we take care of our patients. Set yourself and your organization up for success with plans, policies, and solutions to keep your medical devices, records, and employees safe.


Vittorio Bertocci - Microsoft: Augmenting the set of incoming claims with the OpenID Connect and OAuth2 middleware in Katana 3.x

August 26, 2015 08:55 AM


Here's another (very) frequently asked question. I have the eerie sensation that I have already blogged about it, but a quick search did not yield any post-WIF results… so here you go.
Say that I have a web app or a web API secured with Azure AD (or any other provider, really). Say that in my app I maintain attributes about my users, and I would find it handy to have such attributes exposed in the form of claims, alongside the ones I receive from the trusted authority at authentication (née token validation) time. How do I make it happen with Katana 3.x, Microsoft's OWIN implementation used in ASP.NET 4.6?

OpenId Connect

Easy. Let's start with OpenID Connect (OIDC for brevity). The OIDC middleware graciously offers notifications at key stages of the validation pipeline. The last of those, SecurityTokenValidated, offers you the chance to modify the ClaimsIdentity obtained from the incoming token. Here's an example, where “RetrieveHairLenght” is a hypothetical function that queries my local DB for the desired attribute.

SecurityTokenValidated = (context) =>
{
    string userID = context.AuthenticationTicket.Identity.FindFirst(ClaimTypes.NameIdentifier).Value;
    Claim userHair =
      new Claim("http://mycustomclaims/hairlenght",
                RetrieveHairLenght(userID),
                ClaimValueTypes.Double, "LOCAL AUTHORITY");
    context.AuthenticationTicket.Identity.AddClaim(userHair);
    return Task.FromResult(0);
},
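For context, here's where that handler plugs in. What follows is a minimal sketch of the middleware initialization (typically in Startup.Auth.cs); the ClientId and Authority values are placeholders, and everything except the Notifications wiring is stock Katana 3.x setup:

app.UseOpenIdConnectAuthentication(
    new OpenIdConnectAuthenticationOptions
    {
        ClientId = "your-client-id",                                   // placeholder
        Authority = "https://login.microsoftonline.com/your-tenant",  // placeholder
        Notifications = new OpenIdConnectAuthenticationNotifications
        {
            // the SecurityTokenValidated handler shown above goes here
            SecurityTokenValidated = (context) =>
            {
                // ...augment context.AuthenticationTicket.Identity as shown above...
                return Task.FromResult(0);
            }
        }
    });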

Once you have added that to the Notifications property of the options you initialize the OIDC middleware with, you'll be able to read that claim from anywhere in your app – just like any other “official” claim:

var userHair = ClaimsPrincipal.Current.FindFirst("http://mycustomclaims/hairlenght");

 

Preeeety neat. Note that this happens at token reception time, right before establishing the session. That means that whatever I/O you performed for retrieving your extra attributes will be done only once, which is good; it also means that the resulting custom claims will end up in your session cookie… and if you add too much stuff, the effects might not be good: performance hits, cookie clipping if you exceed the browser limits, and so on. Keep all those considerations in mind as you plan your augmentation strategy.

Web API

Now, say that you want to do the same for a web API.
If you are on ASP.NET 5, good news! You do exactly like the above (modulo the ClaimsPrincipal.Current part, topic for another post).

If you are on ASP.NET 4.6 and Katana, that is a bit trickier. The web API middleware in Katana 3.x does not have a very rich notifications pipeline. However, you still have a mechanism for injecting your custom claims; it's just a bit different. Given that this is a tad more exotic than just filling up a notification, here is the entire middleware initialization with the claim injection inline:

app.UseWindowsAzureActiveDirectoryBearerAuthentication(
    new WindowsAzureActiveDirectoryBearerAuthenticationOptions
    {
        Audience = ConfigurationManager.AppSettings["ida:Audience"],
        Tenant = ConfigurationManager.AppSettings["ida:Tenant"],
        Provider = new OAuthBearerAuthenticationProvider()
        {
            OnValidateIdentity = async context =>
            {
                // the validated token's ClaimsIdentity travels on context.Ticket
                string userID = context.Ticket.Identity.FindFirst(ClaimTypes.NameIdentifier).Value;
                context.Ticket.Identity.AddClaim(
                   new Claim("http://mycustomclaims/hairlenght",
                             RetrieveHairLenght(userID),
                             ClaimValueTypes.Double,
                             "LOCAL AUTHORITY"));
            }
        }
    });

OnValidateIdentity gives you a last chance of modifying the ClaimsIdentity before it gets passed to the app, in analogy to what you have seen for OIDC.

Note that in this case there is no cookie to remember the authority-issued claims and your custom attributes – the nature of the web API is that you’ll get the token at every. single. call.
You'll probably be well served by some in-memory caching strategy, so that the call to RetrieveHairLenght does not have to query slow persistent storage all the time.
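For instance – purely a sketch, reusing the hypothetical RetrieveHairLenght from above – a static ConcurrentDictionary gives you the simplest possible memoization layer:

// hypothetical in-memory cache: one storage query per user, kept for the process lifetime
private static readonly ConcurrentDictionary<string, string> hairLenghtCache =
    new ConcurrentDictionary<string, string>();

private static string RetrieveHairLenghtCached(string userID)
{
    return hairLenghtCache.GetOrAdd(userID, id => RetrieveHairLenght(id));
}

A real implementation would also want some expiration policy, so that attribute changes eventually show up without recycling the process.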

Short and sweet, especially because it's 2:00am here. Have fun with your custom claims!

August 25, 2015

Courion: Nation State Cyberattacks & Sound as a Password - It's Another #TechTuesday

August 25, 2015 01:02 PM



Julian Bond: These are the ones you should pay attention to.

August 25, 2015 08:44 AM
These are the ones you should pay attention to.
http://fusionanomaly.net/EntitiesNode.html


Julian Bond: Some last.fm data

August 25, 2015 06:54 AM
Not sure if this will ever get updated given the problems with last.fm. It used to be on my profile, but I'm no longer allowed HTML and only 200 chars. And the remote OAuth at http://lastfm.dontdrinkandroot.net/ no longer works. I'm not sure which end that is caused by, but I bet it's last.fm.





August 24, 2015

Gluu: Build a multi-cloud authentication service with DDOS protection in a few hours

August 24, 2015 05:11 PM


The massive amounts of computer and telecommunications infrastructure that were destroyed on September 11th, 2001 changed our perspective on the importance of building robust systems that could enable continuous operations, uninterrupted by a disaster in one physical location. And while the dangers from a natural disaster or act of terrorism are still with us today, businesses face a new peril–the possibility that hackers may launch a distributed denial of service (DDOS) attack.

One of the most important web services to protect is your authentication system. If people can’t be digitally identified–whether it’s your customers, partners, or employees–they won’t be able to utilize any of your mobile or Web applications.

Building a resilient authentication system that can withstand network outages and DDOS attacks used to be quite a challenge. Organizations had to lease data center space and deploy complicated and expensive network infrastructure. However, the “cloud” has changed all this. What used to take months or years can now be accomplished in hours! Sound impossible? Yet this is exactly what we did using the latest Gluu Server cluster packages and an innovative service provider called DOSarrest. Here’s an overview of our proof-of-concept:

  1. Created a cloud server on Digital Ocean with Ubuntu 14.04 (dosdo.gluu.org) – 15 minutes
  2. Created a cloud server on Rackspace with Ubuntu 14.04 (dosrs.gluu.org) – 15 minutes
  3. Installed the Gluu cluster master packages on dosdo.gluu.org – 15 minutes
  4. Installed the Gluu cluster consumer packages on dosrs.gluu.org – 15 minutes
  5. Used the Cluster web interface to create a new cluster and deployed ldap, oxauth, and oxtrust nodes on both dosdo.gluu.org and dosrs.gluu.org – 60 minutes
  6. Updated our DNS to point at the IP address provided by DOSarrest – 15 minutes
  7. Set up a web server with mod_auth_oidc, pointing at the cluster IP address (not strictly required, but we needed a sample Web application to test) – 30 minutes
  8. Tested with a round robin load balancer configuration to make sure that the Gluu Server was stateless, and either dosdo.gluu.org or dosrs.gluu.org could handle the authentication requests – 15 minutes

Total time: 3 hours

With two servers deployed on different cloud providers, we had our two data centers. In the past this could have been a very complex task. However, the Gluu Server leverages Docker and an open source networking package called Weave to manage how the Gluu services can securely communicate with each other.

And of course, if there was a DDOS attack against our authentication service… DOSarrest, the A-team of DDOS protection, has our back!

Not bad for a few hours work!

For more information about the Gluu Server, schedule a meeting with us!

Radiant Logic: From Join to Context for Profile Management: Mapping the Nurturing Process at the Database Level

August 24, 2015 04:58 PM

In my last blog post (was it really more than two months ago—I plead CEO overload!), we looked at how SQL and data integration are essential to the development of a truly useful customer profile. At the end, I promised to step through the process of nurturing relationships, where we guide prospects and customers through each stage, sharing and collecting information in a step-wise cadence. So here goes—and note that I’m using the vocabulary and categorizations from Salesforce, one of the main customer relationship management apps on the market:

  1. First, a set of information is collected from an interested party—also known as a lead—and further information is sent to match the needs of that lead.
  2. After that, the lead is qualified as a prospect, and the sales rep conducts further qualification discussions to move that prospect to the next stage of the pipeline.
  3. At this point, enough information is known on the needs of the prospect to determine if an opportunity for a sale exists. If yes, the sales rep takes the final qualification step by negotiating the terms of a deal.
  4. When (and if) a deal is struck, that opportunity becomes a customer.

What we can see in this nurturing process, as in most business processes or complex transactions, is that the whole operation is built around a series of steps, or a business workflow. At each step, specific information is gathered and you move to the next steps only when the information requirement of the current step is fulfilled, as we see below:

Navigate

What I am describing here is obvious at the business level—or “conceptual level” in the parlance of the data-modeling world. However, when it comes to the details of low-level implementation at the data structure or database level, things are not so cleanly delineated and as a result, currently deployed solutions are far from optimal. So let’s revisit this pattern as it applies to the integration of a user profile at the level of SQL.

Digging Deeper Into the Process of Building a User Profile

Let’s suppose that we have four sets of information—AKA tables in SQL—about a user, with relationships that can be direct (such as how the blue table with the person icon connects with the green or orange table in the diagram below) or indirect (like the blue and violet tables). For the sake of simplicity, we’ll say that these four tables are hosted within the same database system (in real life, you might very well have this information spread across several different data silos, adding a ton more complexity to the task of building a user profile).

To get a complete profile of a customer, you would glue together those tables by joining them via a garden variety “inner join based on equality,” which is the most frequently used style of join.

Inner Join Based On Equality
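To make that concrete, here is a minimal SQL sketch with hypothetical table and column names, mirroring the diagram: person (the blue table) joins directly to account (green) and activity (orange), and indirectly to detail (violet) through activity:

SELECT p.name, a.account_type, v.activity_date, d.detail_info
FROM person p
INNER JOIN account a ON a.person_id = p.person_id        -- direct relationship
INNER JOIN activity v ON v.person_id = p.person_id       -- direct relationship
INNER JOIN detail d ON d.activity_id = v.activity_id     -- indirect, via activity

The output is one flat row per path linking the person to each combination of related records – which is exactly the problem described next.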

And at this stage, most implementers would be happy—but are we really delivering the profile that was defined in the specification? Not even close. What we’re getting is a list aggregated in a table—basically a hodgepodge of all paths linking the initial info about the person (the blue icon) to the rest of the profile information (orange, green, violet).

What we really need is a complete profile segmented by the different steps of our workflow, one that’s organized by the different contexts of our nurturing process. In this specific data organization, we can easily distinguish who is still in the lead stage, and who is a prospect, opportunity, or customer. This graph—or hierarchy—gives us a complete, contextual picture of our pipeline, narrowing from a very large lead base to a smaller subset of customers, as we see below.

Contextual Picture

Fulfilling this requirement for clarity and context enables us to imagine an architecture that would deliver a better solution to our quest for a complete profile. Don’t worry—we’ll dig deep into this architecture in my next post.

Data Architecture



Mark Dixon - Oracle: The Power of PowerPoint

August 24, 2015 03:31 PM

How many PowerPoint slides have you presented?  How many statistics have you used (or abused)?

Marketoonist 150824

Katasoft: Single Sign-on for Java in 20 Minutes with Spring Boot and Heroku

August 24, 2015 03:00 PM

I love how Java keeps reinventing itself to stay current and relevant (I can hear all my Node.js and Ruby friends groaning). The ecosystem that supports Java is keeping pace with new developments as well. Today, it's as easy to build, test and deploy a rich Java web app as it is in Python or Node.js (more groans).

One piece of that is Spring Boot, which makes building and launching a Java webapp in minutes a reality. Heroku’s focus on Java support also speeds things along.

Finally, Stormpath means developers don’t have to build authentication and authorization workflows. Stormpath’s identity API and single sign-on functionality (via IDSite) provide out-of-the-box account registration, login, email workflows and single sign-on across applications. These flows include default forms and views, all of which are customizable.

In this post, we will put all that together and get the added bonus of Single Sign-on across your applications – all within 20 minutes.

Read on – tick tock!

Here are the prerequisites you need for this tutorial:

Note: You can just as easily use Maven. The source code that goes with this post includes a pom.xml, if that's your preferred build tool.

To make it super easy, we’ve added a handy Heroku deploy button to each example, so you can see it in action right away. If this takes you more than 20 minutes, please let us know what held you up in the comments. We love feedback.

Launch Spring Boot – 5 Minute Tutorial

Note: If you are already well versed in the world of Spring Boot, you may want to jump to the next section. There – I just saved you 5 minutes. You're welcome.

This section uses the SpringBootBasic tag in the github repository.

Deploy

Spring Boot enables you to fire up a fully functioning Java web application just like you would start a simple Java application. It has a main method and everything. For instance, the @SpringBootApplication annotation does everything that the @Configuration, @EnableAutoConfiguration and @ComponentScan annotations (with their default attributes) do in a vanilla Spring application.

What makes Spring Boot work so well and so easily are Starter packages that add in functionality, including default configuration. The Stormpath Spring Boot Thymeleaf Starter we will use further on bakes in all of the Stormpath functionality for creating new users, logging in and changing passwords. All you do is reference a single jar in your build.gradle or pom.xml file.

For our basic example, we are going to include the core Spring Boot Starter Web and the Thymeleaf Spring Boot Starter. Thymeleaf is a modern HTML 5 Java templating engine.

Here’s our build.gradle:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.2.5.RELEASE")
    }
}

apply plugin: 'java'
apply plugin: 'maven'
apply plugin: 'spring-boot'

group = 'com.stormpath'
version = '0.1.0'

sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-web', version:'1.2.5.RELEASE'
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-thymeleaf', version:'1.2.5.RELEASE'
}

There are three more files we need to get our basic Spring Boot app going.

IDSiteDemoApplication.java is the application’s entry point:

package com.stormpath.idsite_demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class IDSiteDemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(IDSiteDemoApplication.class, args);
    }
}

The @SpringBootApplication annotation sets up all the configuration necessary to launch the application.

HomeController.java maps a URI and resolves to a Thymeleaf template:

package com.stormpath.idsite_demo.controllers;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class HomeController {
    @RequestMapping("/")
    public String home() {
        return "home";
    }
}

The @Controller and @RequestMapping annotations set this class up as a controller and configure it to handle requests at the / URI. Simply returning the String home hooks into the Thymeleaf template architecture which leads us to our final file:

home.html located in the templates folder is the template that will be rendered when browsing to /:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
    <head>
        <th:block th:include="fragments/head :: head"/>
    </head>
    <body>
        <div class="container-fluid">
            <div class="row">
                <div class="box col-md-6 col-md-offset-3">
                    <div class="stormpath-header">
                        <img src="https://stormpath.com/images/template/logo-nav.png"/>
                    </div>
                    <h1>Hello!</h1>
                </div>
            </div>
        </div>
    </body>
</html>

Note: You may notice the th:include directive in the template above. This is part of the Thymeleaf architecture for including files in other files. The full source code for the example has the templates/fragments/head.html file.

Alrighty, then. Let’s round out this first 5 minutes by firing up this most basic of Spring Boot apps.

gradle clean build will do the trick. Then: java -jar build/libs/idsite_demo-0.1.0.jar

build it

Add Stormpath for SpringBoot Authentication

This section uses the SpringBootStormpath tag in the github repository.

Deploy

In this section, we’ll:

Create a Stormpath Account

Go to the Stormpath Registration page. Enter your First and Last names, company, email and password.

register

Click Signup.

Click the link in the verification email you receive. Then, you will see the tenant name that’s been generated for you.

login

Login. Done.

Note: For more information on multi-tenant applications, we have a handy blog post on it.

Generate a Stormpath API Key Pair

Once you log in to your Stormpath account, you will see this screen:

dashboard

Click the Create API Key button.

api key

Click the Create API Key button and save the file.

The API Keys stored in that file are used to authenticate your application to Stormpath. In the file, there's an apiKey.id and an apiKey.secret. You would never want the apiKey.secret exposed. So, for instance, you would never want to have the api keys file checked into a git repository. When we deploy to Heroku later on, I will show you how to configure your app to use the api keys without having to have them in the git repository.

Stormpath uses well documented configuration defaults to make working with our APIs super easy. One of these defaults is the api key file location. The Java SDK will automatically look for the file in your home directory:

~/.stormpath/apiKey.properties

If you copy the file you downloaded to that path, no additional configuration is required to connect to Stormpath from your application.
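For reference, the downloaded file is just a two-line properties file along these lines (the values below are placeholders):

apiKey.id = YOUR_API_KEY_ID
apiKey.secret = YOUR_API_KEY_SECRET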

Add an Application to Your Stormpath Account

Back on the admin dashboard, click the Applications tab.

applications

You will notice that there are two applications already present: My Application and Stormpath. They were set up automatically when you registered for Stormpath. Without any other Stormpath applications defined, no further configuration is needed for your Spring Boot application. By default, it will connect to the My Application instance already defined.

However, the ultimate goal here is to get some Single Signon goodness and in order to do that, we’ll need more than one application to sign in to.

So, let’s create another Stormpath application. Click the Create Application button.

new_application

Let’s break down the options here.

Name and (optional) description are self explanatory. And, it makes sense that we want this application Enabled.

By default, the Create new Directory checkbox is checked. For our example, I’ve unchecked this option. Rather, I’ve checked the Map Account Stores to this Application checkbox and chosen the My Application Directory. Finally, I’ve clicked the DEFAULT ACCOUNT LOCATION and DEFAULT GROUP LOCATION radio buttons.

So, what’s going on here? The way that Stormpath is organized, an application can use any number of directories as its Account Stores. A Stormpath directory is just a bucket that contains accounts and groups. For our purposes, we can use the directory that was automatically created for us when we registered called My Application Directory. In the bonus section below, I will show you how to create a specific type of directory to add Google authentication to your app. Spoiler alert: It’s super easy.

Update Your Spring Boot Webapp

Let’s hook up our basic Spring Boot app to Stormpath to show some Stormpath app information. This will lay the foundation for being able to integrate with the ID Site service.

Take a look at our HomeController:

package com.stormpath.idsite_demo.controllers;

import com.stormpath.sdk.application.Application;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class HomeController {
    @Autowired
    Application app;

    @RequestMapping("/")
    public String home(Model model) {
        model.addAttribute("appName", app.getName());
        model.addAttribute("appDescription", app.getDescription());

        return "home";
    }
}

We’ve now taken advantage of Spring’s @Autowired capability to give us a handle to the Stormpath Application object. Using that, we set the Application’s name and description in the Model object which will be passed on to our template.

This brings us to our next change, the home.html Thymeleaf template:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
    <head>
        <th:block th:include="fragments/head :: head"/>
    </head>
    <body>
        <div class="container-fluid">
            <div class="row">
                <div class="box col-md-6 col-md-offset-3">
                    <div class="stormpath-header">
                        <img src="https://stormpath.com/images/template/logo-nav.png"/>
                    </div>
                    <h1 th:inline="text">Hello! Welcome to App: [[${appName}]]</h1>

                    <h3 th:inline="text">[[${appDescription}]]</h3>
                </div>
            </div>
        </div>
    </body>
</html>

Using the Thymeleaf notation to pull information out of the model, we are referencing [[${appName}]] and [[${appDescription}]].

Finally, we’ll make a small (but powerful) update to our build.gradle file. We are changing this line:

compile group: 'org.springframework.boot', name: 'spring-boot-starter-thymeleaf', version:'1.2.5.RELEASE'

to this:

compile group: 'com.stormpath.spring', name: 'spring-boot-starter-stormpath-thymeleaf', version:'1.0.RC4.5'

We’ve swapped out Spring’s Thymeleaf Spring Boot Starter for Stormpath’s. Here’s the cool bit: everything needed to interact with the Stormpath Java SDK is included in this Starter.

There’s a total of 7 lines that have changed in our application files, plus one new file, application.properties, that we’ve added in order to start hooking in to Stormpath.
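The post doesn't reproduce application.properties here; if you're following along without environment variables, it would presumably carry the pointer to your Stormpath Application – something along these lines (property name per Stormpath's Spring Boot conventions; the href value is a placeholder):

stormpath.application.href = https://api.stormpath.com/v1/applications/YOUR_APP_ID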

Build Your Java Web Application

One extra bit of information we will need here is the URL to the Stormpath Application you created.

You can find this from the admin dashboard by navigating to your Application.

application id

Assuming that you put your api key file in the default location of ~/.stormpath/apiKey.properties, this is all you need to do to run this example:


gradle clean build
STORMPATH_APPLICATION_HREF=https://api.stormpath.com/v1/applications/6bHOGj63WM8cfC2nhD3Pki \
  java -jar build/libs/idsite_demo-0.1.0.jar

Of course, you would put your own STORMPATH_APPLICATION_HREF in.

app info

You can see that the page in the browser is now displaying the information from the Stormpath Application that we created.

Stormpath Single Sign-On with IDSite…

…you guessed it. In Five Minutes.

This section uses the SpringBootStormpathIDSite tag in the github repository.

Deploy

You may have had the experience of adding authentication and authorization to your applications. Maybe you did it upfront. Maybe it was something you said you’d get to – eventually. Either way it’s a pain. And, it has nothing to do with the problem you are trying to solve. It is critical and necessary, though.

In this section, we are going to add the ability to create new users, login, restrict access to a page to only those users that are logged in and change your password. And, we are going to do it with minimal coding and minimal configuration.

ID Site Configuration

First, we’ll set up ID Site from the admin dashboard. Click the ID Site tab.

id site

As you scroll around, you will notice that there are a number of fields with the label Upgrade Required. The basic ID Site functionality can be used with our free tier, as we will see momentarily. Having a custom domain or customizing the templates used for authentication requires a paid subscription.

Here, we are simply going to update two fields and save the settings.

id site

For security, you must specify a list of URLs that are allowed to make connections to your ID Site.

Enter http://localhost:8080 in the Authorized Javascript Origin URLs field.

For security reasons, you must specify a list of authorized redirect URLs.

Enter http://localhost:8080/restricted/id_site_callback and, on a separate line, http://localhost:8080/ in the Authorized Redirect URLs field.

Click the Save button. That’s all that’s necessary to configure your ID Site to enable authentication and authorization in your app.

Let’s take a step back and use a precious 30 seconds of our 5 minutes to look at the mechanism behind ID Site.

When a user attempts to access a restricted area of your website, they will be redirected to your ID Site, IF they do not already have an active session.

They will be presented with a familiar login form complete with options to create a new user and to reset their password.

id site login

Where did this come from? Is it magic? It’s part of what you get for using ID Site – all the authentication and authorization flows that you usually write on your own. Poorly. (ouch – that was a little harsh. But, seriously – how often do you read about security breaches due to poorly implemented auth code?)

Once authenticated, they will be redirected back to the URL you specify and will be able to access that restricted content.

This process will seem utterly familiar to your users – even mundane. And you will have accomplished it with very little configuration or coding.

Update Your Spring Boot Webapp

We are going to add 50 lines of code in a new controller – total – to hook into ID Site. We will also add a new template that is restricted to people that have logged in to your application and update our home template.

Let’s take a look at that controller, RestrictedController.

package com.stormpath.idsite_demo.controllers;

import com.stormpath.sdk.account.Account;
import com.stormpath.sdk.application.Application;
import com.stormpath.sdk.idsite.AccountResult;
import com.stormpath.sdk.idsite.IdSiteUrlBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@Controller
public class RestrictedController {
    @Autowired
    Application app;

    private static final String ID_SITE_CALLBACK = "/restricted/id_site_callback";

    private String getBaseURL(HttpServletRequest request) {
        String url = request.getRequestURL().toString();
        String uri = request.getRequestURI();
        return url.substring(0, url.length() - uri.length());
    }

    @RequestMapping("/restricted/secret")
    public void idSiteStep1(HttpServletRequest request, HttpServletResponse response) {
        IdSiteUrlBuilder idSiteBuilder = app.newIdSiteUrlBuilder();
        idSiteBuilder.setCallbackUri(getBaseURL(request) + ID_SITE_CALLBACK);

        response.setStatus(HttpServletResponse.SC_FOUND);
        response.setHeader("Cache-control", "no-cache, no-store");
        response.setHeader("Pragma", "no-cache");
        response.setHeader("Expires", "-1");
        response.setHeader("Location", idSiteBuilder.build());
    }

    @RequestMapping(ID_SITE_CALLBACK)
    public String idSiteStep2(HttpServletRequest request, Model model) {
        AccountResult accountResult = app.newIdSiteCallbackHandler(request).getAccountResult();
        Account account = accountResult.getAccount();

        model.addAttribute("firstName", account.getGivenName());

        return "restricted/secret";
    }

    @RequestMapping("/logout")
    public void logout(HttpServletRequest request, HttpServletResponse response) {
        IdSiteUrlBuilder idSiteBuilder = app.newIdSiteUrlBuilder();
        idSiteBuilder.setCallbackUri(getBaseURL(request) + "/");
        idSiteBuilder.forLogout();

        response.setStatus(HttpServletResponse.SC_FOUND);
        response.setHeader("Cache-control", "no-cache, no-store");
        response.setHeader("Pragma", "no-cache");
        response.setHeader("Expires", "-1");
        response.setHeader("Location", idSiteBuilder.build());
    }
}

Let’s break this down method by method.

getBaseURL

This private method takes in an HttpServletRequest object and returns the base of the full URL pulled out of it.

If http://localhost:8080/restricted/secret is the URL, http://localhost:8080 will be returned.

idSiteStep1

This method is bound to the request path /restricted/secret. This kicks off the ID Site interaction resulting in the hosted login form for the user.

In order to be redirected properly to the ID Site, a special URL needs to be built. Fortunately, you don’t have to worry about those details. The IdSiteUrlBuilder object manages all of that for you. All you need to do is to set the callback URL in this case. When your IdSiteUrlBuilder object is all set, calling the build method returns the correct URL string to redirect to. All of the response lines in the idSiteStep1 method are preparing the response to redirect to your ID Site.

idSiteStep2

This method is bound to the request path /restricted/id_site_callback, which is what we set with the setCallbackUri method in the previous step. This is the glue that causes ID Site to redirect back to your web app after authenticating. We are using the AccountResult object to pull the Account object out and get at the givenName, which we then pass along to the view template found at restricted/secret.

logout

The final method in this controller is logout. It creates a logout URL using the IdSiteUrlBuilder object. The forLogout method tells the builder to create a logout URL. The callback that is set brings us back to the front door of the app.

Let’s take a look at the new template, restricted/secret.html:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
    <head>
        <title>Hello World!</title>
        <th:block th:include="fragments/head :: head"/>
    </head>
    <body>
    <div class="container-fluid">
        <div class="row">
            <div class="box col-md-6 col-md-offset-3">
                <div class="stormpath-header">
                    <img src="https://stormpath.com/images/template/logo-nav.png"/>
                </div>

                <h1 th:inline="text">Hey, [[${firstName}]]</h1>
                <h3>The secret is the Andromeda Galaxy is going to collide with the Milky Way Galaxy in 4.5 billion years.</h3>
                <h4>Better start packing!</h4>
                <form th:action="@{/logout}" method="post">
                    <input class="btn btn-danger" type="submit" value="Sign Out"/>
                    <a class="btn btn-success" href="/">Go Home</a>
                </form>
            </div>
        </div>
    </div>

    </body>
</html>

There are two interesting lines here, from the perspective of interacting with ID Site.

<h1 th:inline="text">Hey, [[${firstName}]]</h1>

This line accesses the firstName variable that we retrieved from the Account in the controller and set in the model.

<form th:action="@{/logout}" method="post">

This form sets the action to hit our logout method in the controller and tells it to use an HTTP POST to do it.

Finally, we are adding a single line in to our home.html template that kicks off the whole login flow:

<a class="btn btn-success" href="/restricted/secret">Click here for a secret message.</a>

Remember, /restricted/secret will be picked up by the idSiteStep1 method and will redirect us to your ID Site login form.

Fire Up Your Webapp and Try It Out

Start up the app as before:


gradle clean build
STORMPATH_APPLICATION_HREF=https://api.stormpath.com/v1/applications/6bHOGj63WM8cfC2nhD3Pki \
  java -jar build/libs/idsite_demo-0.1.0.jar

Since we don’t yet have any users defined in our Stormpath directory, let’s create a new user and then make sure we can log in and log out as that user.

First, browse to the front door: http://localhost:8080

restricted home

Click the friendly green button.

id site login

Click the Create an Account link.

create account

Click the friendly green button.

restricted

Huzzah! We’re in!

If you click the green button now, you will be brought back to the home page. If you then click the green button on the home page, you will go directly to the restricted page. You will not see the login form again. This is because you have established a valid session.

logged in

If you click the red button, you will be logged out and redirected to the home page. Clicking the green button brings you to the login form once again as you have trashed your session.

You may notice that after we created our account, we were immediately logged in and sent to the restricted page. You can slow this down by requiring email verification in your Stormpath admin console as part of the account creation process.

Note: There is a known issue whereby you cannot be logged into the Stormpath Admin Dashboard and authenticate using ID Site in the same session. We are working on resolving this issue ASAP. It would never affect your users, as they would never be in your Stormpath Admin Dashboard. For now, use a separate browser profile or separate browser instance when using the Stormpath Admin Dashboard.

Single Sign-On with Heroku in 5 Minutes

This section uses the SpringBootStormpathIDSite tag in the github repository.

Deploy

Note: You can use the Heroku Deploy button above to deploy two different Heroku Apps if you want to test out SSO without deploying yourself.

Phew! Home stretch! So, what’s this SSO I keep hearing so much about? With the foundation we’ve built, we are now in a position to deploy multiple instances of this web app to Heroku. So What? I’ll tell you “So What!”

While it’s a novelty that we can deploy multiple instances of the web app, what really gives it power is ID Site’s Single Sign-On capability. By the end of this section you will see that by logging in to one instance of the webapp, you can browse to the restricted page of another instance of the web app without having to log in again.

First, we need to add a file so Heroku knows how to launch our app. It’s a one-liner called Procfile:

web: java $JAVA_OPTS -Dserver.port=$PORT -jar target/*.jar

Notice the bash style variable: $PORT. This is automatically populated by Heroku and does not need to be explicitly set by us.

Let’s setup and deploy one Heroku app and make sure it all works.

heroku apps:create idsite-demo-app1 --remote idsite-demo-app1

Notice the --remote on the end of the command. Heroku automatically adds a git remote to your local repository in order to be able to deploy your app. By default, this remote will be named heroku. Since we will be deploying multiple instances of the app, we want different remote names.

Now that we’ve created the app, we need to set some config parameters. This is part of the secret sauce that allows us to deploy the same codebase, but link the web app to different Stormpath Applications.

heroku config:set \
  STORMPATH_API_KEY_ID=<your api key id> \
  STORMPATH_API_KEY_SECRET=<your api key secret> \
  STORMPATH_APPLICATION_HREF=<your app href> \
--app idsite-demo-app1

Remember I said earlier that one of the benefits of how Stormpath configures itself is that you are not required to embed sensitive api key information in your code? Here's where it all comes together. In the above command, we are setting environment variables for our Heroku instance. The Stormpath SDK automatically checks for the presence of the STORMPATH_API_KEY_ID, STORMPATH_API_KEY_SECRET and STORMPATH_APPLICATION_HREF environment variables. If present, the SDK will automatically use the values in those environment variables when interacting with the API. It's what connects our Spring Boot web app to the right Stormpath Application.

Ok. The stage is set. Let’s deploy our app!

git push idsite-demo-app1 master

This generates a ton of output, but let’s look at some of the highlights:

remote: Compressing source files... done.        
remote: Building source:        

...

remote:        [INFO]                                                                                 
remote:        [INFO] ------------------------------------------------------------------------        
remote:        [INFO] Building demo 0.0.1-SNAPSHOT        
remote:        [INFO] ------------------------------------------------------------------------        

...

remote:        [INFO] Installing /tmp/build_a7299c4194f003c6e3730e568a540e82/target/demo-0.0.1-SNAPSHOT.jar to /app/tmp/cache/.m2/repository/com/stormpath/idsite_demo/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.jar        

remote:        [INFO] ------------------------------------------------------------------------        
remote:        [INFO] BUILD SUCCESS        
remote:        [INFO] ------------------------------------------------------------------------        

...

remote: -----> Discovering process types        
remote:        Procfile declares types -> web        
remote: 
remote: -----> Compressing... done, 63.6MB        
remote: -----> Launching... done, v6        
remote:        https://idsite-demo-app1.herokuapp.com/ deployed to Heroku        
remote: 
remote: Verifying deploy.... done.        
To https://git.heroku.com/idsite-demo-app1.git
 * [new branch]      master -> master

Toward the bottom, Heroku is discovering the process type based on our Procfile. In this case, it’s web.

Last bit of housekeeping for our first app is to configure ID Site to accept connections from it and to redirect to it. Jump back over to your admin dashboard for ID Site and add http://idsite-demo-app1.herokuapp.com to the list of Authorized Javascript Origin URLs and add http://idsite-demo-app1.herokuapp.com/ and http://idsite-demo-app1.herokuapp.com/restricted/id_site_callback to the list of Authorized Redirect URLs.

id site

Make sure you click the Save button at the bottom of the screen.

And, http://idsite-demo-app1.herokuapp.com/ is ready to go! Check it out. Create an account. Log in and log out. Have fun with it.

We’ve now arrived at the gates of the SSO promised land. Here’s all that’s left to do:

We are just rinsing and repeating what we did before.

Let’s go create our new Stormpath Application:

new application

Notice that we are mapping the same Account Store for this new application.

Time to create a new Heroku app:

heroku apps:create idsite-demo-app2 --remote idsite-demo-app2

And, configure it:

heroku config:set \
  STORMPATH_API_KEY_ID=<your api key id> \
  STORMPATH_API_KEY_SECRET=<your api key secret> \
  STORMPATH_APPLICATION_HREF=<your app href> \
--app idsite-demo-app2

Make sure you use the full URL of the newly created Stormpath Application.

Deploy time:

git push idsite-demo-app2 master

Finally, ID Site URLs update:

id site

You can now check the box on your ToDo list that says: build and deploy an SSO application. You’ve done it!

id site

You can log in to http://idsite-demo-app1.herokuapp.com. Then, you can jump directly over to http://idsite-demo-app2.herokuapp.com/restricted/secret and you will not have to login again!

Happy SSOing!

In this post, you’ve created a Spring Boot web app that enables Single Sign-on with Stormpath’s ID Site service. Stormpath hosts the login form and all the other links and forms associated with creating a new user and resetting your password.

With a small amount of code in one Controller, you can authenticate and authorize users for your app – and you deployed it all quickly with Heroku. I’d love to hear about your experience in working with the examples in this post.

If you’re interested in using more features of Stormpath in Spring Boot, here’s our Spring Boot Webapp Sample Quickstart

Feel free to drop a line over to email or to me personally anytime.

Like what you see? Keep up with the latest releases.

Julian Bond: Editing old and long abandoned Flash SWF code.

August 24, 2015 01:57 PM
Editing old and long abandoned Flash SWF code.
TL;DR. I need a lazyweb recipe or help for swf->decompile->edit->recompile->swf

Today's trip down the computing rabbit hole is all about Flash and old code. I was a user of a bit of clever code called Tuneglue that allowed you to wander round and visualise the links between music artists. You put in one artist, hit expand and it would query last.fm for similar artists and then build a rubber band mesh of the links. It was a great way of exploring musical artist space. The people who wrote it disappeared, leaving a ghost web site behind[1]. The people who hosted it[2] were bought by EMI who then killed the web server. We found the page on the internet archive[3] and amazingly the Flash code still worked. So I grabbed a copy and put it on my website[4]. All went well till about 10 days ago. Then last.fm[5] went live with their beta and killed the V1 of their API[6] used by the flash code[7]. The example data[8] from V1 is really not that different from V2[9].

So I thought, maybe I can decompile the flash .swf file, make a few changes to support v2 of the last.fm API and then recompile it. I found an online site that will decompile swf[10]. Deep in the code, the call to last.fm and the xml parsing looks pretty simple.
  Xml.load(("http://ws.audioscrobbler.com/1.0/artist/" + UrlEncode(this.Artist)) + "/similar.xml");
  while (E < EMIArtists.length) {
    EMIArray.push(Xml.childNodes[0].childNodes[E].childNodes[0].childNodes[0].nodeValue);
    etc
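For what it's worth, here is a rough, untested sketch of the v2 equivalent, based on the URL format in [9] (the api_key value is a placeholder; v2 also wraps the response in an extra <lfm> root element, so each parsing path gains one level):

  Xml.load("http://ws.audioscrobbler.com/2.0/?method=artist.getsimilar&artist=" + UrlEncode(this.Artist) + "&api_key=YOUR_API_KEY");
  // v2: childNodes[0] is now <lfm>, so the old childNodes[0] paths become childNodes[0].childNodes[0]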

This doesn't look hard. There's only half a dozen lines that need changing to support last.fm API v2 and I think I can puzzle out the syntax and make it work. So then I started looking for tools to do the swf->decompile->recompile->swf round trip.

And that's when I fell down the rabbit hole into other decompilers[11], IDEs, numerous support environments (Java! Ugh!), confusion about what language I was looking at, missing project files, huge downloads that wouldn't install, install files that the anti-virus took 10 minutes to decide were ok, support forums populated by idiots, trial versions of software, abandoned open source projects, and so on and so on. Right now, I've just given up in disgust.

So, dearest Lazyweb. Is there anyone out there who's ever successfully done swf->decompile->recompile->swf and can provide a recipe? Or even better is there another music obsessive who wants to take a stab at doing it?

Always assuming that last.fm don't just resurrect the API V1. They're looking hugely incompetent at the moment so I'm not holding out a lot of hope. 

http://voidstar.com/tuneglue/

[1]http://www.onyro.com/
[2]http://audiomap.tuneglue.net
[3]https://web.archive.org/web/20140328020033/http://audiomap.tuneglue.net/
[4]http://voidstar.com/tuneglue/
[5]http://www.last.fm/
[6]https://getsatisfaction.com/lastfm/topics/will-v1-0-of-the-api-come-back-online
[7]https://web.archive.org/web/20060911141306/http://www.audioscrobbler.net/data/webservices/#Artist Data
[8]https://web.archive.org/web/20061231223630/http://ws.audioscrobbler.com/1.0/artist/Metallica/similar.xml
[9]http://ws.audioscrobbler.com/2.0/?method=artist.getsimilar&artist=metallica&api_key=d50ed5584be64a1564a5d1a12e3fef7f
[10]http://www.showmycode.com/
[11]https://www.free-decompiler.com/flash/
August 21, 2015

Mike Jones - Microsoft: “amr” values “rba” and “sc”

August 21, 2015 09:21 PM

Authentication Method Reference Values draft -02 changed the identifier for risk-based authentication from “risk” to “rba”, by popular acclaim, and added the identifier “sc” (smart card).
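For context, “amr” is a JSON array of strings carried in the token. A hypothetical ID Token claims set using both new identifiers might look like this (the issuer and subject values are made up):

{
  "iss": "https://server.example.com",
  "sub": "248289761001",
  "amr": ["rba", "sc"]
}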

The specification is available at:

An HTML formatted version is also available at:

Kantara Initiative: Spotlight on Kantara Accredited Assessor Electrosoft

August 21, 2015 06:29 PM

In this edition of Spotlight, we are pleased to tell readers more about Electrosoft, one of the first Kantara Accredited Assessors.

 

1. Why was your service/product created, and how is it providing real-world value today? 

Electrosoft was launched in 2001 with a passionate group of information assurance experts propelled by the desire to make a difference and elevate the state of information security within the Federal community. Our first customer was the National Institute of Standards and Technology (NIST), where we pioneered some of the earliest Federal government standards and guidelines related to the issuance and authentication of digital identities. Next, the Department of Defense (DoD) engaged us to identify and vet new and innovative methods for rapid online verification of public key infrastructure (PKI) certificates – this work enabled the later expansion of X.509 digital certificate use across the DoD enterprise. During this period, our innovation and high-quality work caught the attention of several other customers including the Department of Health and Human Services (HHS), Veterans Benefit Administration (VBA), Drug Enforcement Administration (DEA) and Treasury, who engaged us to address critical challenges they faced in the areas of FISMA compliance, incident response, configuration management, strong authentication and digital signatures. Thus, the first few years of our corporate trajectory spanned projects in the information security space that were at the leading edge of innovation.

Today, Electrosoft is a strong and highly-acclaimed Economically Disadvantaged Woman-Owned Small Business (EDWOSB) serving a diversified set of customers and delivering a comprehensive set of technology-based solutions and services. As an established Federal prime contractor, we employ mature management practices to work with our many subcontractors (large and small) to deliver the right solutions to our customers on time and within budget. We consistently achieve Very Good or Exceptional ratings on our CPARS (Contractor Performance Assessment Reporting System) evaluations and past performance questionnaires. We have received numerous customer kudos for our outstanding support as well as various prestigious business awards.

We are proud to serve our customers and deliver the most effective technology-based solutions to propel each of them to success in their mission and goals. We have a proven track-record as a leader, facilitator and problem solver that can tackle the toughest challenges in our customer community through teamwork, dedication and a drive to provide optimal solutions to meet our customers’ needs. Our knowledge, experience and corporate culture set us apart from our competition – let us show you what a difference we can make to the success of your mission!

2. Where is your organization envisioned to be strategically in the next 5-10 years?

Electrosoft is on a strong path for growth in the next 5-10 years.  We’ve built our reputation on helping Federal Government and commercial customers with broad missions solve difficult problems.  We will continue to expand our solutions and services in the cyber, information assurance, and identity management areas for the federal and commercial customers we currently serve.  In addition, we are rapidly building a strong portfolio of service offerings including systems development, testing, IV&V, staffing and support, and training, and offering these to new customers in the federal and commercial spaces.  We will continue to develop our workforce and attract top national talent as we grow into new markets and new mission and business areas.

3. Why did you join Kantara Initiative?

Electrosoft has been at the forefront of credential reuse and federation solutions as well as assessment services. In Kantara, Electrosoft saw an opportunity to support and join an early trust framework provider that could offer credential assurance services for non-PKI credentials. As an author of early policy documents related to non-PKI online authentication, such as NIST SP 800-63, Electronic Authentication Guideline, Electrosoft has led and closely followed the online authentication sector. Electrosoft believes strongly that for federation of credentials to succeed, more than just technical solutions are needed. A trust framework service, such as the one Kantara offers, is critical to growing the marketplace.

4. What else should we know about your organization, the service/product, or even your own experiences?

Electrosoft offers numerous whitepapers and blog posts, freely available on a wide variety of information security subjects. Please feel free to review our whitepapers (http://www.electrosoft-inc.com/papers/) and blog postings (http://www.electrosoft-inc.com/electroblog/) to learn from our staff. If you have any questions related to our postings, you’ll find the authors willing and eager to engage in an exchange of ideas.

 

Julian BondIs there an RSS/Atom feed for Google Photos? And preferably one that's as easy to use and consume as... [Technorati links]

August 21, 2015 01:05 PM
Is there an RSS/Atom feed for Google Photos? And preferably one that's as easy to use and consume as Flickr's, with content that contains the HTML to show the photo imgs.

And if not, why not?

eg. 
https://www.flickr.com/services/feeds/photos_public.gne?id=83642842@N00&lang=en-us&format=atom

[from: Google+ Posts]

Vittorio Bertocci - MicrosoftOpenId Connect Web Sign On with ADFS in Windows Server 2016 TP3 [Technorati links]

August 21, 2015 07:50 AM

I can’t tell you how excited I am to finally write this post. :)

Yesterday we released the Technical Preview of Windows Server 2016. Yes, it supports containers natively, awesome and groundbreaking, yadda yadda yadda… but if you follow this blog, I know what you look for every time a new Windows Server comes out: whether there’s anything new in ADFS, isn’t it?
Boy, does this release deliver on that. ADFS in Windows Server 2016 TP3 comes with brand new support for OpenId Connect web sign on and for OAuth2 confidential clients – moreover, it makes it easy to manage all that through its MMC. No more fiddling with PowerShell… unless you are a PowerShell wizard, in which case – carry on, good sir/madam. :)

The ADFS team is going to deliver docs that will cover all the new functionality, but I was too excited to wait – and I suspect you will be in the same boat, too. So I quickly set up a VM, fiddled with the new ADFS app configuration features and adapted some of our Azure AD sample code to do OpenId Connect based sign in against ADFS. That was all surprisingly fast… and easy. The blog below documents that journey. It barely scratches the surface of what’s possible with the new ADFS, but I hope it will show you enough to entice you to try it yourself.

Setting up ADFS in Windows Server 2016 TP3

As you guys know, my administrative skills are nil. Every single time I need something administrative done, I hit Lync – err… Skype for Business I mean – and beg Sam or Dean Wells for their help. But this time I could not wait, hence I decided to try my luck and do it myself. Turns out, it wasn’t that hard.

First thing, I downloaded the ISO of WS2016 TP3 from here. All it took was a quick & painless sign in and registration with my MSA. The file is 4.8 GB.

That done, I set up a new VM in Hyper-V. Note, this is all done on my Surface Pro 3 with Windows 10 Pro. I gave the machine 1GB of RAM (although Windows Server only requires 512MB to run, the VM setup will fail with just that; you can reduce it later) and made sure that I had a decent amount of free space on my disk. Turns out that I had never deleted my old Windows.old folder – between that and other housekeeping I ended up with 40GB free, which was enough to set things up. The VHDX after all the setups is 10.4GB, hence very reasonable. For networking, I assigned the VM the “Surface Ethernet Adapter Virtual Switch” – something about the physicality of an ethernet connection makes me feel good. Also remember, when it comes to admin I don’t know what I am doing.

During the VM creation, I specified the downloaded ISO as the setup disk. Then I booted the VM, and witnessed the usual magic of doing a Windows installation entirely from one SSD – super fast. During the Windows setup I chose the “with desktop” variant, which for some mysterious reason is not the default. I guess that working in a no-GUI environment satisfies some self-image requirements for admins… J/K, of course :)

Anyhow, shortly after I was welcomed by the awesome Windows 10 theme. Ta-dahhh.

[screenshot]

If you set up ADFS in the past, you know what’s next. I had to create an AD, which in my thrift setup meant promoting the VM to be a domain controller. That done, I had to configure the ADFS service.

First thing, I went to the server manager/local server tab, scrolled down to roles & features, and under “tasks” I selected “Add Roles and Features”.

[screenshot]

Here, I selected AD DS and ADFS – as shown below. Note: IIS is not in the default roles, nor am I adding it here. Why? Because ADFS in WS2016 does not need it – it’s all self-hosted!

[screenshot]

That done, I hit next until Install was the only option – and chose it. After some progress bar fun, I landed on the screen below.

[screenshot]

Very helpful! I was able to trigger the next step, “promote this server to a domain controller”, just by clicking on the corresponding link.

[screenshot]

I won’t bore you with all the details – also because I did that this morning and I am writing this blog a few hours into the evening, hence I simply don’t remember. The main thing I’ll point out is that I chose the “add new forest” option, given that I want a brand new AD, and that I turned off the DNS service because I don’t need it. Ah, and when you are asked to create a new password – make sure you remember what you choose. It will be required later.

Once you have done all that, the DCpromo will take place… and will take some time, requiring a long reboot. Not as long as it was in the past, I gather, but not instantaneous nonetheless.

Once you come back in, you’ll be reminded in various places that you didn’t set up ADFS yet – just click on any of those and you’ll end up in the configuration wizard. Unfortunately I didn’t take too many screenshots of the process, I was too impatient to get to the apps part :) but if I figured this out, I am sure you can figure it out too… also remember, this is a super early post. The official documentation will follow soon.

Anyhow. Before starting the ADFS configuration in earnest, there’s one task that one has to do: get a certificate that will be used for transport security and signature purposes. Traditional walkthroughs suggest getting a self-signed certificate via IIS, but per the above I didn’t have it installed here and did not want to install it just for spitting out an X509. Luckily, I found a super handy way of doing that via PowerShell! It’s all thanks to this article from my good friend Orin. I opened a PowerShell prompt and entered the following:

New-SelfSignedCertificate -certstorelocation cert:\localmachine\my -dnsname WS2016TP3.vibrodomain.net

That command created a self-signed certificate with a subject of my choosing, of the form <machinename>.<domain>, and saved it in the local machine cert store. The command output provided me with the certificate thumbprint.

Next, I exported the certificate to a file – just in case. The next two cmdlets did that for me; note that I used the thumbprint to identify the certificate.

$pwd = ConvertTo-SecureString -String "whatevah" -Force -AsPlainText
Export-PfxCertificate -cert cert:\localMachine\my\1C14CE8E9077970CE27D2ED58154ED6B7F768401 -FilePath c:\WS2016TP3.vibrodomain.net.pfx -Password $pwd

That done, I went back to the ADFS setup wizard. I am sure you’ll find your way through it, but here are a few hints:

Once you are done, you should land on the screen below. The warnings below do sound a bit scary, but in the end they did not interfere with setting up the application.

[screenshot]

We are almost done with the setup! I did just another couple of things.

First, I created one test user in the directory – I don’t like to use the domain admin user directly, given that it has all sorts of odd constraints that are there for excellent reasons in prod but are in the way while developing.

Second, I had to set things up so that I could see ADFS from my host machine, the Surface. To that end:

Setting up a Web App for OpenId Connect Sign-In in ADFS

NOW we get to the fun part. Go back to the VM and open the ADFS management console.

[screenshot]

There is lots of interesting news to be found if you drill into each of the folders on the left, but here we’ll concentrate on the most obvious new entry – the Application Groups folder. That’s the place where ADFS stores application settings. Select it and choose “Add application group”. You’ll see the wizard below.

[screenshot]

The idea is self-explanatory: you are offered a set of application and application-topology templates, covering the gamut of moving parts you encounter in modern authentication. Select some of the entries at random and you’ll see the wizard steps adapt to the task, growing and shrinking.

Here I am going with the simplest scenario, a web application – hence I pick “server application or Website”, fill in name and description, and click next.

[screenshot]

Looks familiar? The next step tells you upfront what clientid is being assigned to your app, and asks you to supply the redirect_uri to use for sending tokens. I plan to use our Azure AD samples or a VS2015 ASP.NET template to actually implement the app, hence I already know which URLs to use – the usual https://localhost:44300/ for VS2015 or https://localhost:44320/ for the sample. I can add both, just in case! Once done, I click next.

[screenshot]

Here I can select which credentials to assign to my app – big news, given that ADFS didn’t support confidential clients until now. I don’t plan to use any flow requiring creds in this post, but I am adding them anyway for the LOLZ. I chose “generate a shared secret”, corresponding to the string key we use in almost all Azure AD samples. Note that if you don’t write that down NOW, it’s lost forever and you’ll have to reset it… just like its cloud counterpart.

Also note, super interesting: ADFS can use Windows integrated auth as a credential for confidential clients. That makes total sense… and it’s awesome. Just think of the daemon scenarios this enables.

[screenshot]

Finally, here’s the summary. It took what – 40 seconds? Pretty awesome.

[screenshot]

And we are done – I mean done done! I can almost hear you. If you have used ADFS in the past, you’re likely to blurt out “wait a minute, what about defining the relying party trust? What about claims mappings?” Have faith, my friends :) none of that will be necessary.

Setting up an MVC App to Authenticate via OpenId Connect and ADFS

I am finally back on my territory. Setting up an app for talking OpenId Connect to Azure AD or ADFS is, surprise surprise, almost exactly the same operation. There are two quick ways of getting to the app we want. One is to use the VS2015 ASP.NET templates to create an app configured to connect to Azure AD, then modify it to talk to ADFS. The other is to clone one of the OpenId Connect samples for Azure AD, and modify it in the same way (the templates are modeled after the samples). Earlier today I did the templates approach, but for this post I’ll show you how to modify our WebApp-OpenIdConnect-DotNet sample.

Let’s start by cloning the sample. On Windows I like to use the GitHub desktop client, but of course you can use whatever you prefer.

[screenshot]

Open the solution in whatever IDE you prefer. The changes are going to be pretty minimal. Here I’ll use VS2013, which just after a few weeks of VS2015 already feels retro!

Once the solution is open, compile it – so that all the missing NuGets are restored. That done, head to the web.config and modify the ida: appSettings entries as follows:

<add key="ida:ClientId" value="8219ab4a-df10-4fbd-b95a-8b53c1d8669e" />
<add key="ida:ADFSDiscoveryDoc" value="https://ws2016tp3.vibrodomain.net/adfs/.well-known/openid-configuration" />
<!--<add key="ida:Tenant" value="[Enter tenant name, e.g. contoso.onmicrosoft.com]" />
<add key="ida:AADInstance" value="https://login.microsoftonline.com/{0}" />-->
<add key="ida:PostLogoutRedirectUri" value="https://localhost:44320/" />

 

In detail, the modifications here are: ida:ClientId now holds the client id that ADFS assigned to the app in the wizard; the new ida:ADFSDiscoveryDoc entry points to the OpenId Connect discovery document published by ADFS; and the Azure AD-specific ida:Tenant and ida:AADInstance entries are commented out.

[screenshot]

That done, we need to tweak the OpenId Connect middleware initialization logic. Head to App_Start/Startup.Auth.cs, and modify the string inits at the beginning of the file as shown below:

private static string clientId = ConfigurationManager.AppSettings["ida:ClientId"];
//private static string aadInstance = ConfigurationManager.AppSettings["ida:AADInstance"];
//private static string tenant = ConfigurationManager.AppSettings["ida:Tenant"];
private static string metadataAddress = ConfigurationManager.AppSettings["ida:ADFSDiscoveryDoc"];
private static string postLogoutRedirectUri = ConfigurationManager.AppSettings["ida:PostLogoutRedirectUri"];

 

No mysteries here: we are just reflecting the web.config changes in code. That done, modify the OpenId Connect middleware options as in the following.

app.UseOpenIdConnectAuthentication(
    new OpenIdConnectAuthenticationOptions
    {
        ClientId = clientId,
        //Authority = authority,
        MetadataAddress = metadataAddress,
        RedirectUri = postLogoutRedirectUri,
        PostLogoutRedirectUri = postLogoutRedirectUri,
    });

What changed: the Authority property (which points to an Azure AD tenant) is commented out, and in its place MetadataAddress points the middleware straight at the ADFS discovery document; in addition, RedirectUri is now set explicitly, so that it matches one of the redirect URIs registered in the ADFS application group.

That’s it! Hit F5.

Here’s our good old sample UX.

[screenshot]

Hit Sign in.

[screenshot]

Ha! This is exciting! It’s the ADFS default page for forms auth – very similar to the Azure AD one. Here I am signing in as my test user.

[screenshot]

And uneventfully, I am signed in! Note the classic domain\username identifier on the top right corner – the logic originally written for Azure AD worked just as well for ADFS.

If you are curious about the content of the default id_token you get from ADFS, you can inspect the incoming ClaimsPrincipal in the index controller (or the immediate window) by adding ClaimsPrincipal cp = ClaimsPrincipal.Current; (which comes from the System.Security.Claims namespace BTW). Screenshot below:

[screenshot]

And just like that, you have a functioning MVC app authenticating your local AD users via OpenID Connect. Not bad for a few minutes’ work – you should probably ask for that raise you’ve been thinking about! ;)

Summary

Ignore the DCPromo and ADFS setup, which are going to be done by your admins (anyway, they were so straightforward that even I managed to do them in very little time and without help). The new ADFS in Windows Server 2016 TP3 makes it very easy to provision applications, and its support for modern app topologies is finally comprehensive. The OWIN middleware in Katana / ASP.NET, already well proven in Azure AD scenarios, works as-is with ADFS – and the delta between the code required in the two cases is trivial.

I am loving it… and that’s only the beginning. Expect upcoming official docs to describe all the other options in depth – I am sure you will be as delighted and excited as I am right now. Huge kudos to the ADFS team for an awesome prerelease – I am sure they are looking forward to your feedback.

Go ahead, download Windows Server 2016 TP3, set up ADFS and… happy coding!

August 20, 2015

Kantara InitiativeWe Need Your Vote to Get to SXSW 2016 [Technorati links]

August 20, 2015 05:02 PM
Attribution: https://flic.kr/p/iRgkU3

We need your vote to help get us to SXSW!

This year we’re focusing on consent and, particularly, the idea that clicking to “agree” is a broken and outdated paradigm for managing information sharing interactions. We believe this is a critical topic that addresses a power imbalance in how people interact with technology services today. Site visitors either agree or leave. Now, that doesn’t seem very interactive.

Constantly being asked for consent is a paradigm that does not translate into the real world. In the real world it’s rare for someone to ask for your consent to perform an everyday transaction. Think of it – walk into your local coffee shop and ask the barista for your “usual” coffee order. The barista will prepare your order and serve it; you will pay and be on your way. There was no need to ask for consent because cultural and societal norms enable this flow to happen with ease.

Now, look at the digital world. Simple acts, like looking up a recipe to cook rice for example, can generate many consent requests. Your browser might ask to use your location for example. This type of request for consent is odd and inconsistent with how we normally behave as humans in the real world.

These types of “agree” for consent requests happen all the time. Most people don’t know why they are being asked for consent and they don’t know the implications of agreeing (or not). People tend to click “agree” so that they can get access to the information they are looking for. Clicking “agree” becomes a door to access rather than a real understanding of agreement. People are effectively trained to click the “agree” button to move forward.

If people don’t “agree” they can leave the site. This flow represents low to no user interaction. One could argue that it does not help a customer to provide real consent and it also does not help a service to forge a relationship with their customer. This is a sub-optimal situation for all parties involved.

Kantara Initiative members are working to fix this issue through open standards and industry collaboration. They are working to find a better way to perform and track the action of digital consent for information sharing. One such solution is the Minimum Viable Consent Receipt that is being developed in the Consent and Information Sharing Work Group.

If you think this topic is critical toward enabling more interactive relationships between businesses and their customers and citizens and their governments please vote for our session to help us get to SXSW! We hope to see you there!

Read what Internet Society’s Robin Wilton has to say about this session.

Vote – http://panelpicker.sxsw.com/vote/48349


CourionIntelligent IAM for Risk Assessment [Technorati links]

August 20, 2015 12:44 PM

Access Risk Management Blog | Courion

Welcome to the last installment of our 3-part series exploring how intelligence improves identity and access management, or IAM. In part 1 we looked at how intelligence improves the provisioning portion of IAM. In part 2 we took a look at how intelligence improves the governance portion of IAM. In this segment we look beyond provisioning and governance to address how intelligent IAM can help reduce the top 5 most common elements of risk: identity, resources, rights, policy, and activity.

1. Identity: In part 2 of our series, we discussed how human resources were the most dynamic risk facing security teams today. The reason behind this is that you are constantly managing changing identities. Who are you? What is your role? What do you need access to? These are questions constantly being asked by our system and can equate to hundreds or even thousands of access requests a year. 

With intelligent IAM, all roles are built into the system along with the basic applications they need access to. For example, when a marketing manager is hired, they are led through the system to request access to their email account, marketing file share folder, and marketing automation software, because those are typical of their role and inside their peer group. Any request that falls within the boundaries of their peer group is automatically approved. However, if they want access to, say, the sales folder, they have to request special access. This solution gives the user guidelines rather than the all-too-common shopping cart approach, where users request items they don’t really need, creating a backlog of requests while the approver decides whether that access is warranted.

2. Resources: With so many business applications, servers, mobile devices, etc. do you know which assets are critical and must be protected? Do you know which seemingly innocuous applications tie back to a server that needs to be protected?

Governance certifications exist to monitor access to the most sensitive information, applications, and servers. Intelligent IAM governance will not only monitor your most sensitive data, but will also send up a flag, or an alert, when a high-risk event takes place. When accounts are created outside of the provisioning system, or high-risk applications are granted outside of a role or peer group, they will be flagged as a "critical risk".


3. Rights: Who really needs access to what? Before intelligent IAM, all provisioning and governance had to be audited to make sure that the right people had the right access to the right things. The issue was that those rights were always changing. Some applications are not as high risk and can be audited on an annual or semi-annual basis. However, there are other applications that are highly critical and must be assessed on a monthly or weekly basis. Doing this manually for all employees would be impossible.

By using intelligence, your IAM system can review rights as needed and ask for re-certification for sensitive applications. For example: an email account can be automatically re-certified each month as long as the employee isn't terminated. However, the payroll system may need a monthly manual re-certification to make sure that only the right people have access.

4. Policy: What business rules must be enforced in your company? What segregation of duties do you rely on? This is another risk taken care of, somewhat automatically, by the assignment of roles within the organization. Segregation of Duties is an easy addition, especially when set initially. Managers should not be able to both post and approve their own time cards, nor should they be able to place and approve a purchase order. Governance certification and approvals as well as segregation of duty assignments will help to mitigate this risk rather easily.

5. Activity: Who is doing what? And when? Visibility into all of your applications and systems is an extremely difficult task and, without an automated system, basically impossible. Much like the alerts sent by your high-risk resources, you can use intelligent IAM to see what your users are doing with real-time monitoring and be alerted to any inconsistencies. This real-time look into your system shows you what is happening with approvals as well as risk assessment, and can take away the need for annual or semi-annual auditing. With an automated system you will be able to see sensitive updates monthly, weekly, or as needed instead of having to wait 6 to 12 months for an audit.


While the idea of an identity and analytics system is not new, we believe that the use of intelligence in IAM is revolutionizing the industry. With the use of real-time data and information-backed automation, you have visibility into your system at any time rather than waiting for an audit. Your decisions will be made based on the most accurate and up-to-date information.

Want to know more about how Intelligent Identity and Access Management can help you mitigate risk in your organization? Download our eBook, Improving Identity and Access with Intelligence, and learn about: 

- What is Intelligent IAM? 

- Intelligence for Provisioning

- Intelligence for Governance

- Intelligence for Risk 

- And More! 




blog.courion.com

Julian BondIt's so sad to watch a web site that you've been using for more than 10 years screw up and slowly destroy... [Technorati links]

August 20, 2015 12:00 PM
It's so sad to watch a web site that you've been using for more than 10 years screw up and slowly destroy all the goodwill they've built up.

Come on Last.Fm, please don't fail us now.

http://www.last.fm/home

They've mistakenly gone live with a complete redesign when the beta clearly wasn't ready yet. And yet again, 10 years of communities and discussions have gone AWOL probably never to re-emerge. It also seems that most of the APIs are currently broken so there's a lot of developer goodwill lost as well.

Thanks, CBS.

[from: Google+ Posts]
August 19, 2015

Julian BondThe Causal Angel (Jean le Flambeur) by Hannu Rajaniemi [Technorati links]

August 19, 2015 05:43 PM

[from: Librarything]

Julian BondThe Annihilation Score (A Laundry Files Novel) by Charles Stross [Technorati links]

August 19, 2015 05:41 PM

[from: Librarything]

Julian BondHieroglyph: Stories and Visions for a Better Future by Ed Finn [Technorati links]

August 19, 2015 05:41 PM

[from: Librarything]

Julian BondSeveneves: A Novel by Neal Stephenson [Technorati links]

August 19, 2015 05:41 PM

[from: Librarything]

Nat Sakimura“Grown-Up Privacy”: Customer data from an affair site, including nude photos and sexual fantasies, published along with credit card information [Technorati links]

August 19, 2015 12:05 PM
[image: Ashley Madison site] (Source: Ashley Madison)

On the evening of August 18th, US time, the customer data of 37 million users stolen from the affair-oriented dating site Ashley Madison [1], including nude photos and sexual fantasies, was reportedly published along with credit card information. The data amounts to roughly 10 gigabytes, released on the “dark web”, which ordinary search engines do not index [2].

Credit cards can simply be cancelled, but in today’s world a list of names labeled “would-be adulterer” carries a far greater privacy impact in itself. Its economic value is also likely to be considerable, since it can be used for blackmail. No doubt the underworld has already gotten to work on it.

The best way to eliminate this kind of harm is for people to grow up, so that the reaction becomes “An affair? So what?” or “Nobody is interested in that data.” This is what I meant when, on the closing panel of JICS2013 or some such event, I said that what we need now is “grown-up privacy”.

From here on, data will keep leaking out. That is unavoidable. So why does the privacy impact arise? Because people use the data in ways that create a privacy impact. Even if the data is lying out in the open, if everyone pretends they never saw it, the privacy impact disappears.

The current situation, in which a data leak equals a real privacy impact, exists because everyone uses what they ought to leave alone. Like little kids. Yes, the status quo is “little-kid privacy”. The media outlets that chase celebrity romances are the same: like grade-schoolers teasing a couple.

In the IoT era, complete control over privacy data, and complete confidentiality, will no longer be possible. In January 1999, Scott McNealy, then head of Sun Microsystems, used the phrase “zero privacy”: “You have zero privacy anyway. Get over it.” [3] He was pilloried for saying it, but today I suspect you can appreciate what he meant. Restated for our times:

Nobody can keep everything secret anymore. Grow up.

In other words: “When you come across information that someone else would surely want kept secret, pretend you never saw it and leave it be.” [4] That is what it means to be privacy respecting, and that is “grown-up privacy”.

I pray that people grow up, even a little sooner than they otherwise would.

If they don’t, the IoT era is going to be utter chaos!

 

[1] Ashley Madison: https://www.ashleymadison.com/

[2] Widely reported, for example by Gizmodo Japan (“America wept: affair site customer data really has been dumped online”) and CNET Japan (“Ashley Madison member data finally published online? Hackers release statement”).

[3] Scott McNealy: “You have zero privacy. Get over it.” (1999/1), from Wired: “Sun on Privacy: ‘Get Over It’” (1999/1/26)

[4] “The right to privacy is the right to be let alone”: the famous definition of the right to privacy from the Warren & Brandeis article [5]. Words worth savoring.

[5] Warren, S.D., Brandeis, L.D.: “The Right to Privacy” (1890), Harvard Law Review, Vol. IV, No. 5, December 15, 1890

August 18, 2015

OpenID.netRegistration Now Open for OIDF Workshop October 26, 2015 [Technorati links]

August 18, 2015 09:19 PM

Registration http://openid-workshop-oct-2015.eventbrite.com is now open for the OpenID Foundation Workshop being held on October 26, 2015 (the Monday before the Fall IIW meeting) at Symantec’s HQ in Mountain View, CA. OpenID Foundation Workshops provide early insight and influence on widely adopted online identity standards like OpenID Connect. The workshop provides updates and hands-on tutorials on the new OpenID Connect Self Certification Tests by developer Roland Hedberg and the Umeå University team. We’ll review progress on MODRNA (the Mobile Profile of OpenID Connect) as well as other protocols in the OIDF pipeline like RISC, HEART, Account Chooser and Native Applications. We hope to launch the new iGOV Work Group’s development of a profile of OpenID Connect for government applications. Leading technologists from ForgeRock, Microsoft, Google, Ping Identity and the US Government will review work group progress and discuss how they enable new solutions for enterprise and government Internet identity challenges. Thanks to OpenID Foundation Board Members Roger Casals and Brian Berliner and Symantec for hosting the workshop.
Planned Agenda:
11:00 – 11:30 Introduction – Don Thibeau
11:30 – 12:00 OpenID Connect – Mike Jones, John Bradley, Nat Sakimura
12:00 – 01:00 Lunch
01:00 – 01:30 iGOV Profile of OpenID Connect – John Bradley, et al.
01:30 – 02:00 MODRNA (Mobile OpenID Connect Profile) – Torsten Lodderstedt, John Bradley
02:00 – 02:30 Break
02:30 – 03:00 Account Chooser – Pamela Dingle
03:00 – 04:00 RISC – Adam Dawes
04:00 – 04:30 Native Applications – Paul Madsen
04:30 – 05:00 Health Relationship Trust Profiles (HEART) – Deb Bucci, Eve Maler, HMG Cabinet Office Chairs
05:00 – 06:00 OpenID Connect Conformance Testing – Mike Jones and Roland Hedberg, Umeå University

CourionFoiled by the feds, and Facebook security issues – it’s #TechTuesday [Technorati links]

August 18, 2015 12:26 PM

Access Risk Management Blog | Courion

blog.courion.com

August 17, 2015

Nat SakimuraEnterprise big data is leaking all over the place [Technorati links]

August 17, 2015 06:55 AM

It appears that a great deal of corporate “big data” is simply leaking out. According to research [2] by BinaryEdge [1], a security firm based in Zurich, Switzerland, vast amounts of data are sitting exposed on the internet as-is – a total of no less than 1.1 petabytes.

The firm’s survey scanned the internet-facing hosts of a wide range of companies, from Fortune 500 firms to startups, and pulled metadata from publicly reachable MongoDB, Memcached, Elasticsearch and Redis instances. (The firm states explicitly that it did not retrieve the data itself.)

According to the findings, a large number of such instances

were exposing data to the entire world without any authentication or authorization [3]. Quite a few were also running old versions, some of which allow outright server takeover.

The Register followed up on this research with its own reporting [4]. Based on the metadata, the exposed items included:

  1. “usernames”, “passwords”, “session tokens” and the like;
  2. at healthcare organizations, “patients”, “doctor lists” and so on;
  3. at banks, “coin”, “money” and so on;
  4. at a robotics manufacturer, “blueprints”, “project names” and so on.

Readers of this blog are probably most interested in items 1 and 2. Leaked passwords are beyond the pale, and session tokens look usable for session hijacking as well. “Patients”, meanwhile, suggests that medical data may be exposed – a potentially grave privacy violation.

Economically, though, item 4 may be the most serious. Exposing your blueprints… well, perhaps it is open-source hardware…

BinaryEdge says it notifies the companies concerned of the problems it finds. It also offers them a continuous monitoring service.

What these findings suggest is that the people using these technologies probably do not even recognize that securing them matters. It is much like Japanese companies up to the late 1960s, which apparently had no awareness of the grave impact of discharging contaminated water. Education takes too long, so some form of regulation or taxation may well be necessary.

[1] BinaryEdge https://binaryedge.io/

[2] BinaryEdge: Data, Technologies and Security – Part 1 (2015-08-17), http://blog.binaryedge.io/2015/08/10/data-technologies-and-security-part-1/

[3] The figures above exclude companies that asked not to have their IP ranges scanned, so the actual number of servers exposing data is likely higher.

[4] Leyden, John: Misconfigured Big Data apps are leaking data like sieves, The Register (2015-08-13), http://www.theregister.co.uk/2015/08/13/big_data_apps_expose_data/

 

August 14, 2015

Mike Jones - Microsoft“amr” Values spec updated [Technorati links]

August 14, 2015 03:41 AM

OAuth logoI’ve updated the Authentication Method Reference Values spec to incorporate feedback received from the OAuth working group. Changes were:

The specification is available at:

An HTML formatted version is also available at:

August 13, 2015

Kantara InitiativeSpotlight on Kantara Member Kimble & Associates [Technorati links]

August 13, 2015 04:28 PM

In this edition of Spotlight, we are pleased to tell readers more about Kimble & Associates, their unique role in IdM as a Kantara Accredited Assessor, and why they became Members of Kantara Initiative.

 

1) Why was your company created, and how is it providing real-world value today? 

Kimble & Associates was created to develop technology and strategy solutions that, ultimately, make an impact on society as a whole. For over 15 years we’ve helped clients create and implement strategic plans for many complex, high profile initiatives. Clients range from federal agencies, such as the White House and DARPA, to numerous cutting-edge technology companies like Verizon, Lockheed Martin, and Honeywell.

Given the current state of the internet and personal information online, many of our most recent projects have focused on online privacy and security. By strengthening internet identity solutions, a vast number of additional services – such as those for healthcare, education and government – can be moved online in a more secure manner.

From crafting far-reaching, multi-million dollar initiatives to leading results-driven, high performance teams, Kimble and Associates blends its passion for building strategic alliances with its commitment to sound, efficient leadership to deliver exemplary and steadfast client success.

2) Where is your organization envisioned to be strategically in the next 5-10 years?

Our strategy has always been focused on leveraging technology and strategy to make a difference in society. We plan to continue building long-term strategic relationships with our clients and broadening the reach of their services and solutions across many different markets.

3) Why did you join Kantara Initiative?

We are excited to be a part of the core process of improving online transactions. We feel that the work the Kantara Initiative does helps improve the security and privacy controls of a vast array of technologies and user data which, ultimately, enhances usability. As Assessors we will work with our clients to help improve their security and privacy posture and ultimately enable new services for their products.

4) What else should we know about your organization, the service/product, or even your own experiences?

Over the past 15 years, Kimble and Associates has helped clients develop and implement strategic solutions for a wide variety of cutting-edge technology projects. Our work has ranged from bringing forensic science and biometrics to crime laboratories and first responder teams across the nation, to working with DARPA on warfighter technologies, to spending the past 10 years focused on the online identity issue. We’ve been instrumental in deploying smartcards across 80 US federal agencies and to over 800,000 citizens. Most recently, we’ve been working with the private and public sectors to write and implement the National Strategy for Trusted Identities in Cyberspace (NSTIC). We blend technological expertise with a passion for building strategic partnerships and bringing sound, efficient leadership to address many of today’s important issues – healthcare, education, family identity and mobility – and we look forward to continuing to work with our clients to make an impact across these markets and more.

For more about our experience and projects, please visit www.kimbleassociates.com.


KatasoftToken Authentication for Java Applications [Technorati links]

August 13, 2015 03:00 PM

In my last post, we covered a lot of ground, including how we traditionally go about securing websites, some of the pitfalls of using cookies and sessions, and how to address those pitfalls by traditional means.

In this post we’ll go beyond the traditional and take a deep dive into how token authentication with JWTs (JSON Web Tokens) not only addresses these concerns, but also gives us the benefit of inspectable meta-data and strong cryptographic signatures.

Token Authentication to the Rescue!

Let’s first examine what we mean by authentication and token in this context.

Authentication is proving that a user is who they say they are.

A token is a self-contained singular chunk of information. It could have intrinsic value or not. We are going to look at a particular type of token that does have intrinsic value and addresses a number of the concerns with session IDs.

JSON Web Tokens (JWTs)

A JWT is a URL-safe, compact, self-contained string with meaningful information that is usually digitally signed or encrypted. JWTs are quickly becoming a de facto standard for token implementations across the web.

URL-safe is a fancy way of saying that the entire string is encoded so there are no special characters and the token can fit in a URL.

The string is opaque and can be used standalone in much the same way that session IDs are used. By opaque, I mean that looking at the string itself provides no additional information.

However, the string can also be decoded to pull out meta-data, and its signature can be cryptographically verified so that your application knows the token has not been tampered with.

JWTs and OAuth2 Access Tokens

Many OAuth2 implementations are using JWTs for their access tokens. It should be stated that the OAuth2 and JWT specifications are completely separate from each other and don’t have any dependencies on each other. Using JWTs as the token mechanism for OAuth2 affords a lot of benefit as we’ll see below.

JWTs can be stored in cookies, but all the rules for cookies we discussed before still apply. You can entirely replace your session id with a JWT. You can then gain the additional benefit of accessing the meta-information directly from that session id.

In the wild, they look like just another ugly string:

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwOi8vdHJ1c3R5YXBwLmNvbS8iLCJleHAiOjEzMDA4MTkzODAsInN1YiI6InVzZXJzLzg5ODM0NjIiLCJzY29wZSI6InNlbGYgYXBpL2J1eSJ9.43DXvhrwMGeLLlP4P4izjgsBB2yrpo82oiUPhADakLs

If you look carefully, you can see that there are two periods in the string. These are significant as they delimit different sections of the JWT.

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
.
eyJpc3MiOiJodHRwOi8vdHJ1c3R5YXBwLmNvbS8iLCJleHAiOjEzMDA4MTkzODAsInN1YiI6InVzZXJzLzg5ODM0NjIiLCJzY29wZSI6InNlbGYgYXBpL2J1eSJ9
.
43DXvhrwMGeLLlP4P4izjgsBB2yrpo82oiUPhADakLs

JWT Structure

JWTs have a three part structure, each of which is base64-encoded:

[diagram: annotated JWT structure]

Here are the parts decoded:

Header

{
  "typ": "JWT",
  "alg": "HS256"
}

Claims

{
  "iss":"http://trustyapp.com/",
  "exp": 1300819380,
  "sub": "users/8983462",
  "scope": "self api/buy"
}

Cryptographic Signature

tß´—™à%O˜v+nî…SZu¯µ€U…8H×
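
The header and claims sections are just base64url-encoded JSON, so anyone can decode and inspect them without knowing the signing key. Here's a minimal sketch using only java.util.Base64 from the JDK (the class name JwtPeek is purely illustrative):

import java.util.Base64;

public class JwtPeek {
    public static void main(String[] args) {
        // The example token from above, reassembled.
        String jwt = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9."
            + "eyJpc3MiOiJodHRwOi8vdHJ1c3R5YXBwLmNvbS8iLCJleHAiOjEzMDA4MTkzODAs"
            + "InN1YiI6InVzZXJzLzg5ODM0NjIiLCJzY29wZSI6InNlbGYgYXBpL2J1eSJ9."
            + "43DXvhrwMGeLLlP4P4izjgsBB2yrpo82oiUPhADakLs";

        String[] parts = jwt.split("\\.");
        Base64.Decoder decoder = Base64.getUrlDecoder();

        // The first two sections decode to plain JSON; no key is required.
        System.out.println(new String(decoder.decode(parts[0]))); // header
        System.out.println(new String(decoder.decode(parts[1]))); // claims

        // parts[2] is the binary signature; verifying it requires the key.
    }
}

This is exactly why you should never put sensitive data in a JWT that is signed but not encrypted.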

JWT Claims

Let’s examine the claims sections. Each type of claim that is part of the JWT Specification can be found here.

iss is who issued the token. exp is when the token expires. sub is the subject of the token. This is usually a user identifier of some sort.

The above parts of the claim are all included in the JWT specification. scope is not included in the specification, but it is commonly used to provide authorization information. That is, what parts of the application the user has access to.

One advantage of JWTs is that arbitrary data can be encoded into the claims as with scope above. Another advantage is that the client can now react to this information without any further interaction with the server. For instance, a portion of the page may be hidden based on the data found in the scope claim.

NOTE: It is still critical and a best practice for the server to always verify actions taken by the client. If, for instance, some administrative action was being taken on the client, you would still want to verify on the application server that the current user had permission to perform that action. You would never rely on client side authorization information alone.

You may have picked up on another advantage: the cryptographic signature. The signature can be verified, which proves that the JWT has not been tampered with. Note that the presence of a cryptographic signature does not guarantee confidentiality. Confidentiality is ensured only when the JWT is encrypted as well as signed.

Now, for the big kicker: statelessness. While the server will need to generate the JWT, it does not need to store it anywhere as all of the user meta-data is encoded right in to the JWT. The server and client could pass the JWT back and forth and never store it. This scales very well.

Managing Bearer Token Security

Implicit trust is a tradeoff. These types of tokens are often referred to as Bearer Tokens because all that is required to gain access to the protected sections of an application is the presentation of a valid, unexpired token.

You have to address issues like: How long should the token be good for? How will you revoke it? (There’s a whole other post we could do on refresh tokens.)
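
One common mitigation is to keep bearer tokens short-lived, so that a stolen token is only useful for a narrow window. With the JJWT library introduced in the next section, that's just a matter of setting the exp claim; a minimal sketch, where the 15-minute lifetime and the key variable are illustrative assumptions:

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

import java.util.Date;

// Illustrative: issue a token that expires 15 minutes from now, limiting
// how long a leaked bearer token can be replayed.
Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);

String shortLivedJwt = Jwts.builder()
    .setSubject("users/8983462")
    .setIssuedAt(new Date())          // the 'iat' claim
    .setExpiration(expiration)        // the 'exp' claim
    .signWith(SignatureAlgorithm.HS256, key)
    .compact();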

You have to be mindful of what you store in the JWT if they are not encrypted. Do not store any sensitive information. It is generally accepted practice to store a user identifier in the form of the sub claim. When a JWT is signed, it’s referred to as a JWS. When it’s encrypted, it’s referred to as a JWE.

Java, JWT and You!

We are very proud of the JJWT project on Github. Primarily authored by Stormpath’s own CTO, Les Hazlewood, it’s a fully open-source JWT solution for Java. It’s the easiest to use and understand library for creating and verifying JSON Web Tokens on the JVM.

How do you create a JWT? Easy peasy!

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

byte[] key = getSignatureKey();

String jwt = 
    Jwts.builder().setIssuer("http://trustyapp.com/")
        .setSubject("users/1300819380")
        .setExpiration(expirationDate)
        .claim("scope", "self api/buy")
        .signWith(SignatureAlgorithm.HS256, key)
        .compact();

The first thing to notice is the fluent builder api used to create a JWT. Method calls are chained culminating in the compact call which returns the final JWT string.

Also notice that when we are setting one of the claims from the specification, we use a setter. For example: .setSubject("users/1300819380"). When a custom claim is set, we use a call to claim and specify both the key and value. For example: .claim("scope", "self api/buy")

It’s just as easy to verify a JWT.

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jws;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureException;

String subject = "HACKER";
try {
    Jws<Claims> jwtClaims = 
        Jwts.parser().setSigningKey(key).parseClaimsJws(jwt);

    subject = jwtClaims.getBody().getSubject();

    //OK, we can trust this JWT

} catch (SignatureException e) {

    //don't trust the JWT!
}

If the JWT has been tampered with in any way, parsing the claims will throw a SignatureException and the value of the subject variable will stay HACKER. If it’s a valid JWT, then subject will be extracted from it: jwtClaims.getBody().getSubject()
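
Custom claims come back out just as easily. A quick sketch, continuing from the verified jwtClaims above (the api/buy check is only an illustration):

// Custom claims are retrieved by name; JJWT casts them to the requested type.
String scope = jwtClaims.getBody().get("scope", String.class);

if (scope != null && scope.contains("api/buy")) {
    // this token authorizes purchase calls against the API
}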

What is OAuth?

In the next section, we’ll look at an example using Stormpath’s OAuth2 implementation, which makes use of JWTs.

There’s a lot of confusion around the OAuth2 spec. That’s, in part, because it is really an über spec – it has a lot of complexity. It’s also because OAuth1.a and OAuth2 are very different beasts. We are going to look at a very specific, easy to use, subset of the OAuth2 spec. We have an excellent post that goes into much more detail on What the Heck is OAuth. Here, we’ll give some brief background and then jump right into the examples.

OAuth2 is basically a protocol that supports authorization workflows. What this means is that it gives you a way to ensure that a specific user has permissions to do something.

That’s it.

OAuth2 isn’t meant to do stuff like validate a user’s identity — that’s taken care of by an Authentication service. Authentication is when you validate a user’s identity (like asking for a username / password to log in), whereas authorization is when you check to see what permissions an existing user already has.

Just remember that OAuth2 is a protocol for authorization.

Using OAuth Grant Types for Authorization

Let’s look at a typical OAuth2 interaction.

POST /oauth/token HTTP/1.1
Origin: https://foo.com
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=username&password=password

grant_type is required. The application/x-www-form-urlencoded content type is required for this type of interaction as well. Given that you are passing the username and password over the wire, you would always want the connection to be secure. The good thing, however, is that the response will have an OAuth2 bearer token. This token will then be used for every interaction between the browser and server going forward. There is a very brief exposure here where the username and password are passed over the wire. Assuming the authentication service on the server verifies the username and password, here’s the response:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
    "access_token":"2YotnFZFEjr1zCsicMWpAA...",
    "token_type":"example",
    "expires_in":3600,
    "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA...",
    "example_parameter":"example_value"
}

Notice the Cache-Control and Pragma headers. We don’t want this response being cached anywhere. The access_token is what will be used by the browser in subsequent requests. Again, there is no direct relationship between OAuth2 and JWT. However, the access_token can be a JWT. That’s where the extra benefit of the encoded meta-data comes in. Here’s how the access token is leveraged in future requests:

GET /admin HTTP/1.1
Authorization: Bearer 2YotnFZFEjr1zCsicMW...

The Authorization header is a standard header. No custom headers are required to use OAuth2. Rather than the type being Basic, in this case the type is Bearer. The access token is included directly after the Bearer keyword. This completes the OAuth2 interaction for the password grant type. Every subsequent request from the browser can use the Authorization: Bearer header with the access token.

There’s another grant type known as client_credentials, which uses a client_id and client_secret rather than a username and password. This grant type is typically used for API interactions. While the client id and client secret function similarly to a username and password, they are usually higher-quality secrets and not necessarily human readable. A sketch of what such a request looks like on the wire is shown below.
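
For completeness, here is a sketch of a client_credentials request; the endpoint and placeholder values are assumptions, and the response has the same shape as the password grant response above:

POST /oauth/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id=<client id>&client_secret=<client secret>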

Take Us Home: OAuth2 Java Example

We’ve arrived! It’s time to dig into some specific code that demonstrates JWTs in action.

Spring Boot Web MVC

There are a number of examples in the Stormpath Java SDK. Here, we are going to look at a Spring Boot Web MVC example. Here’s the HelloController from the example:

@RestController
public class HelloController {

    @RequestMapping("/")
    String home(HttpServletRequest request) {

        String name = "World";

        Account account = AccountResolver.INSTANCE.getAccount(request);
        if (account != null) {
            name = account.getGivenName();
        }

        return "Hello " + name + "!";
    }

}

The key line, for the purposes of this demonstration is:

Account account = AccountResolver.INSTANCE.getAccount(request);

Behind the scenes, account will resolve to an Account object (and not be null) ONLY if an authenticated session is present.

Build and Run the Example Code

To build and run this example, do the following:

☺ dogeared jobs:0 ~/Projects/StormPath/stormpath-sdk-java (master|8100m)
➥ cd examples/spring-boot-webmvc/
☺ dogeared jobs:0 ~/Projects/StormPath/stormpath-sdk-java/examples/spring-boot-webmvc (master|8100m)
➥ mvn clean package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Stormpath Java SDK :: Examples :: Spring Boot Webapp 1.0.RC4.6-SNAPSHOT
[INFO] ------------------------------------------------------------------------

... skipped output ...

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.865 s
[INFO] Finished at: 2015-08-04T11:46:05-04:00
[INFO] Final Memory: 31M/224M
[INFO] ------------------------------------------------------------------------
☺ dogeared jobs:0 ~/Projects/StormPath/stormpath-sdk-java/examples/spring-boot-webmvc (master|8100m)

Launch the Spring Boot Example

You can then launch the Spring Boot example like so:

☺ dogeared jobs:0 ~/Projects/StormPath/stormpath-sdk-java/examples/spring-boot-webmvc (master|8104m)
➥ java -jar target/stormpath-sdk-examples-spring-boot-web-1.0.RC4.6-SNAPSHOT.jar

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.2.1.RELEASE)

2015-08-04 11:51:00.127  INFO 17973 --- [           main] tutorial.Application                     : Starting Application v1.0.RC4.6-SNAPSHOT on MacBook-Pro.local with PID 17973 

... skipped output ...

2015-08-04 11:51:04.558  INFO 17973 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2015-08-04 11:51:04.559  INFO 17973 --- [           main] tutorial.Application                     : Started Application in 4.599 seconds (JVM running for 5.103)

NOTE: This assumes that you’ve already setup a Stormpath account and that your api keys are located in ~/.stormpath/apiKey.properties. Look here for more information on quick setup up of Stormpath with Spring Boot.

Authenticate with a JSON Web Token (or Not)

Now, we can exercise the example and show some JWTs in action! First, hit your endpoint without any authentication. I like to use httpie, but any command line http client will do.

➥ http -v localhost:8080
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:8080
User-Agent: HTTPie/0.9.2


HTTP/1.1 200 OK
Accept-Charset: big5, big5-hkscs, cesu-8, euc-jp, euc-kr, gb18030, ... 
Content-Length: 12
Content-Type: text/plain;charset=UTF-8
Date: Tue, 04 Aug 2015 15:56:41 GMT
Server: Apache-Coyote/1.1

Hello World!

The -v parameter produces verbose output and shows all the headers for the request and the response. In this case, the output message is simply: Hello World!. This is because there is not an established session.

Authenticate with the Stormpath OAuth Endpoint

Now, let’s hit the oauth endpoint so that our server can authenticate with Stormpath. You may ask, “What oauth endpoint?” The controller above doesn’t indicate any such endpoint. Are there other controllers with other endpoints in the example? No, there are not! Stormpath gives you oauth (and many other) endpoints right out-of-the-box. Check it out:

➥ http -v --form POST http://localhost:8080/oauth/token  \
> 'Origin:http://localhost:8080' \
> grant_type=password username=micah+demo.jsmith@stormpath.com password=<actual password>
POST /oauth/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded; charset=utf-8
Host: localhost:8080
Origin: http://localhost:8080
User-Agent: HTTPie/0.9.2

grant_type=password&username=micah%2Bdemo.jsmith%40stormpath.com&password=<actual password>

HTTP/1.1 200 OK
Cache-Control: no-store
Content-Length: 325
Content-Type: application/json;charset=UTF-8
Date: Tue, 04 Aug 2015 16:02:08 GMT
Pragma: no-cache
Server: Apache-Coyote/1.1
Set-Cookie: account=eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxNDQyNmQxMy1mNThiLTRhNDEtYmVkZS0wYjM0M2ZjZDFhYzAiLCJpYXQiOjE0Mzg3MDQxMjgsInN1YiI6Imh0dHBzOi8vYXBpLnN0b3JtcGF0aC5jb20vdjEvYWNjb3VudHMvNW9NNFdJM1A0eEl3cDRXaURiUmo4MCIsImV4cCI6MTQzODk2MzMyOH0.wcXrS5yGtUoewAKqoqL5JhIQ109s1FMNopL_50HR_t4; Expires=Wed, 05-Aug-2015 16:02:08 GMT; Path=/; HttpOnly

{
    "access_token": "eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxNDQyNmQxMy1mNThiLTRhNDEtYmVkZS0wYjM0M2ZjZDFhYzAiLCJpYXQiOjE0Mzg3MDQxMjgsInN1YiI6Imh0dHBzOi8vYXBpLnN0b3JtcGF0aC5jb20vdjEvYWNjb3VudHMvNW9NNFdJM1A0eEl3cDRXaURiUmo4MCIsImV4cCI6MTQzODk2MzMyOH0.wcXrS5yGtUoewAKqoqL5JhIQ109s1FMNopL_50HR_t4",
    "expires_in": 259200,
    "token_type": "Bearer"
}

There’s a lot going on here, so let’s break it down.

On the first line, I am telling httpie that I want to make a form url-encoded POST – that’s what the --form and POST parameters do. I am hitting the /oauth/token endpoint of my locally running server. I specify an Origin header. This is required to interact with Stormpath for the security reasons we talked about previously. As per the OAuth2 spec, I am passing up grant_type=password along with a username and password.

The response has a Set-Cookie header as well as a JSON body containing the OAuth2 access token. And guess what? That access token is also a JWT. Here are the claims decoded:

{
  "jti": "14426d13-f58b-4a41-bede-0b343fcd1ac0",
  "iat": 1438704128,
  "sub": "https://api.stormpath.com/v1/accounts/5oM4WI3P4xIwp4WiDbRj80",
  "exp": 1438963328
}

Notice the sub key. That’s the full Stormpath URL to the account I authenticated as. Now, let’s hit our basic Hello World endpoint again, only this time, we will use the OAuth2 access token:

➥ http -v localhost:8080 \
> 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxNDQyNmQxMy1mNThiLTRhNDEtYmVkZS0wYjM0M2ZjZDFhYzAiLCJpYXQiOjE0Mzg3MDQxMjgsInN1YiI6Imh0dHBzOi8vYXBpLnN0b3JtcGF0aC5jb20vdjEvYWNjb3VudHMvNW9NNFdJM1A0eEl3cDRXaURiUmo4MCIsImV4cCI6MTQzODk2MzMyOH0.wcXrS5yGtUoewAKqoqL5JhIQ109s1FMNopL_50HR_t4'
GET / HTTP/1.1
Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxNDQyNmQxMy1mNThiLTRhNDEtYmVkZS0wYjM0M2ZjZDFhYzAiLCJpYXQiOjE0Mzg3MDQxMjgsInN1YiI6Imh0dHBzOi8vYXBpLnN0b3JtcGF0aC5jb20vdjEvYWNjb3VudHMvNW9NNFdJM1A0eEl3cDRXaURiUmo4MCIsImV4cCI6MTQzODk2MzMyOH0.wcXrS5yGtUoewAKqoqL5JhIQ109s1FMNopL_50HR_t4
Connection: keep-alive
Host: localhost:8080
User-Agent: HTTPie/0.9.2



HTTP/1.1 200 OK
Content-Length: 11
Content-Type: text/plain;charset=UTF-8
Date: Tue, 04 Aug 2015 16:44:28 GMT
Server: Apache-Coyote/1.1

Hello John!

Notice on the last line of the output that the message addresses us by name. Now that we’ve established an authenticated session with Stormpath using OAuth2, these lines in the controller retrieve the first name:

Account account = AccountResolver.INSTANCE.getAccount(request);
if (account != null) {
    name = account.getGivenName();
}

Summary: Token Authentication for Java Apps

In this post, we’ve looked at how token authentication with JWTs not only addresses the concerns of traditional approaches, but also gives us the benefit of inspectable meta-data and strong cryptographic signatures.

We gave an overview of the OAuth2 protocol and went through a detailed example of how Stormpath’s implementation of OAuth2 uses JWTs.

Here are some other links to posts on token based authentication, JWTs and Spring Boot:

Token Based Authentication for Angular.js

JJWT – JSON Web Token for Java and Android

Spring Boot Webapp Sample Quickstart

JWT Specification

Feel free to drop a line over to email or to me personally anytime.

Like what you see? to keep up with the latest releases.

KatasoftThe Problem with Securing Single Page Applications [Technorati links]

August 13, 2015 03:00 PM

We talk a lot about Token Authentication, but before diving into the details of how to use tokens, it’s critical for developers to understand the underlying security issues. Why do tokens matter, and what types of vulnerabilities do they protect an application from?

“Problem” is such a negative word. Let’s say that Single Page Applications (SPAs) and mobile webapps present new security “challenges”. We call these types of applications “untrusted clients”, since our server-side code has no control over the environment they run in. Even regular web applications have these issues: people can easily alter or inject JavaScript code on a page through the developer console, and mobile apps, such as those on Android and iOS, can be decompiled and inspected. As such, you would not want to embed sensitive information like secret keys or passwords in these types of clients.

In this post, I will cover some of the best techniques for securing webapps and how to handle the pitfalls of those approaches. The discussion applies to all modern programming languages.

Buckle up – we’ve got a lot of ground to cover. Let’s get started!

Security Concerns for Modern Webapps

The primary goal of web application security is to prevent malicious code from running in our applications. We want to ensure user credentials are sent to our servers in a secure way. We want to secure our API endpoints. As a bonus, we want to expose access control rules to the client.

The following sections address each of these concerns.

Cross Site Scripting (XSS) Attacks

XSS attacks occur when a vulnerable application allows arbitrary JavaScript code to be injected and executed on your page. This often happens through form fields on the page.

The Open Web Application Security Project (OWASP) pages are an excellent resource for information on XSS attacks, as well as other types of web client vulnerabilities and remedies.

In the clip below, you can see this behavior in action. This is taken from the app security live example page.

[Animation: XSS demo]

I put script tags into the search field of the page form. Since this site is not protected against XSS attacks, it goes ahead and executes that script code, resulting in the alert popup.

xss<script>alert('hi!');</script>

The problem here is that on this page, anything that’s put into the input field is sent back to the page and rendered verbatim.

There’s a great cheat sheet on OWASP for how to prevent XSS.

In a nutshell, the remedy for XSS is to escape all user input. On the cheat sheet referenced above, there are links to a number of XSS protection libraries. It’s best to use an existing, trusted, open-source library for this; you definitely do not want to “roll your own”, as a lot of due diligence has gone into a mature library, and you could miss vectors of attack – or even introduce new ones – writing your own escaping library.
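
To make that concrete on the Java side, the OWASP Java Encoder project is one such library, and it reduces the fix to a one-liner. A minimal sketch, where searchTerm stands in for the untrusted input and out is your response writer:

import org.owasp.encoder.Encode;

// HTML-escape the untrusted value before echoing it into the page, so that
// "xss<script>alert('hi!');</script>" renders as inert text instead of executing.
String safe = Encode.forHtml(searchTerm);
out.write("You searched for: " + safe);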

A number of popular frameworks, such as AngularJS, have XSS protections out-of-the-box, but you should still understand what’s included in that protection.

Secure User Credentials

Traditionally, users enter their authentication information in the form of a username and password and transmit that information up to the application server (hopefully in a secure fashion) as an HTTP POST. Assuming the credentials are correct, the application server creates a unique session id to identify the user and sends it back in the form of a Set-Cookie header on the response. On each subsequent request from the user, that session id is presented in the request in the form of a Cookie header. Here’s what this looks like:

[Image: session id exchange via Set-Cookie and Cookie headers]

The use of the session id accomplishes a few important goals:

  1. The userid and password do not need to be sent up to the application server on subsequent requests. The session id becomes a proxy that represents the user.
  2. The application server typically uses the session id to store information about the user, such as name, permissions, and other meta-data.

A similar process is used to secure API endpoints with session IDs. Java security frameworks like Apache Shiro and Spring Security make use of annotations to express how web applications and APIs should be secured. They do much of the heavy lifting involved in managing session data as well. This includes storing, retrieving, and expiring sessions and their associated data.
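
To see what this looks like without any framework at all, here is a minimal sketch using the plain Servlet API; the container generates the session id and the Set-Cookie header for you (credential checking is elided):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class LoginServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // ... verify the posted username/password against your user store ...

        // Creating the session makes the container generate a session id and
        // return it to the browser as a Set-Cookie: JSESSIONID=... header.
        HttpSession session = req.getSession(true);
        session.setAttribute("username", req.getParameter("username"));

        // On each subsequent request the browser presents the Cookie header,
        // and the container resolves it back to this same HttpSession.
        resp.sendRedirect("/");
    }
}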

In order to get at the user information identified by the session id, an additional round trip on the network is required. Endpoints such as /me or /profile are commonly used to accomplish this. The session id itself does not contain any information that can be used by a client, such as a browser.

These are among the drawbacks and challenges of this approach, which we’ll address in the next post. We think that authentication tokens address these issues better – but more on that later.

Use Cookies the Right Way

Cookies are OK if used correctly, but they can be compromised in a number of ways. We are going to look at two of these vulnerabilities in detail.

Man-in-the-Middle (MITM) Attacks

Man-in-the-middle refers to a situation where you believe you are connecting to a particular server, but in reality there is another “listener” in between you and your intended server. That listener, which you are actually connected to, intercepts your communication and usually will replay it to the server you intended to reach. This is what makes it seem like you are connected to where you intended to go – the listener is ferrying data back and forth between you and the server, all the while saving data or even altering responses from your intended server.

Watch out for the scenario where you establish a secure connection with HTTPS and then downgrade that connection back to HTTP. This is never safe. Once the connection is downgraded, the session id will be passed in the clear on the network – such as that cozy coffee shop you are sitting in – and anyone listening in would be able to use that id. This is a variation on the typical man-in-the-middle attack. The goal is to get a hold of your session id and then use that id to impersonate you on the website to which you are authenticated. This is called hijacking the session.

The remedy here is to use HTTPS everywhere and to use TLS even on internal networks. This last point is important to guard against other attack vectors. For instance, log files and database dumps pose a vulnerability for an out-of-band attack.
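
At the application layer, one concrete piece of that remedy is flagging the session cookie itself. A sketch using the Servlet 3.0 API, where sessionId and resp stand in for your session identifier and HttpServletResponse; most containers can also set these flags declaratively in web.xml:

import javax.servlet.http.Cookie;

// Secure: the browser refuses to send this cookie over plain HTTP, so a
// downgraded connection never leaks the session id in the clear.
// HttpOnly: script code (including injected XSS payloads) cannot read it.
Cookie sessionCookie = new Cookie("JSESSIONID", sessionId);
sessionCookie.setSecure(true);
sessionCookie.setHttpOnly(true);
resp.addCookie(sessionCookie);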

If your webserver is very secure, but you log session IDs to a log file and you save those log files in a less secure place, attackers can hijack sessions by getting a hold of that backed up log file. Likewise, if your database is very secure, but your dumpfiles are backed up to a less secure location, attackers can brute-force crack passwords at their leisure if they are able to get a hold of a database dump file.

Cross Site Request Forgery (CSRF)

"... occurs when a malicious web site, email, blog, instant message or program causes a user’s web browser to perform an unwanted action on a trusted site for which the user is currently authenticated"

from: Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet

CSRF occurs when a malicious site has a link or form that connects to another site that you are already logged in to. Here’s an example scenario:

  1. You log in to your bank account at https://myficticiousbank.com, as you normally would to review your balance and transactions
  2. You don’t log out
  3. You get an email from your buddy that has a link in it that says: “See cute cats”
  4. Unbeknownst to you, that link connects back to your bank’s website and performs a transaction to send money to Mr. Bad Guy

How is this possible? Firstly, this exact scenario is very unlikely, because banks are very familiar with CSRF and protect against it. Assuming that weren’t the case, for the purposes of this example, it’s possible because you have an active session with your bank. The attacker has no knowledge of your session id or any other cookies; the attacker is just counting on the chance that you didn’t log out of your session. When you click the link, the browser happily sends along the cookies representing your session, since there is already a session associated with the domain you are now connecting to. Let’s take a look at what the link in that email might look like:

<a href="https://myficticiousbank.com/transfer?to=MrBadGuy&amount=10000">
    See Cute Cats!
</a>

All you see is the link to click on. Once clicked, you’re not going to see cute cats at all! You will be back at your bank’s website, probably confused as to why you just transferred all your money to Mr. Bad Guy.

There are three primary remedies for CSRF that we will examine now:

Synchronizer Token

With the Synchronizer Token approach, the server embeds a dynamic hidden variable in an input form. When the form is submitted, the server can check to make sure that the hidden variable is present and that it is the correct value. Let’s say you are on a trusted travel site and you are about to book a vacation around the world. Here’s how the “buy” form looks:

[Image: synchronizer token – transaction succeeds]

Now, let’s say you get an email with a link to book the vacation of your dreams with your trusted travel site. The link actually connects to a hacker site that’s trying to get you to use your trusted travel site to book travel for them! The form looks the same as your trusted travel site’s, and you don’t notice that the URL is different. However, since your trusted travel site has implemented the Synchronizer Token approach to defeating CSRF attacks, the transaction fails when you click the Buy button. There’s no way for the hacker site to know what the correct token should be; when the hidden token field is not present on the form submit, the trusted travel site fails the transaction.

[Image: synchronizer token – transaction fails]

With the Synchronizer Token approach, you can use the same token over again, but it’s better for it to be a nonce – that is, a one-time-use token. Using nonces prevents replay attacks.

There are a few considerations with the Synchronizer Token approach. Your rendering layer has to cooperate in some way in order to place the token in the hidden field on your form; view templating frameworks, such as Thymeleaf, provide this functionality out-of-the-box. Synchronizer Tokens also have to be kept in some sort of data store or cache, which can lead to challenges at scale, such as having to propagate tokens across a cluster of web servers. The approach is hard to pull off in Single Page Applications (SPAs), too: SPAs are typically pre-compiled, static pages with a pile of JavaScript used to update parts of the DOM, while Synchronizer Tokens require being able to generate the hidden token field on the form. Finally, Synchronizer Tokens only protect against forged POST requests. This isn’t a real problem as long as you adhere to the idempotent nature of GET requests that is baked into the HTTP spec: GET requests should never modify server state.
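
To make the mechanics concrete, here is a minimal hand-rolled sketch; the _csrf names are just illustrative, and in a real application you would lean on your framework’s built-in CSRF support rather than code like this:

import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;

public class SynchronizerTokens {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Called when rendering the form: store a fresh token in the session and
    // emit it as <input type="hidden" name="_csrf" value="...">.
    public static String issueToken(HttpServletRequest req) {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        req.getSession().setAttribute("_csrf", token);
        return token;
    }

    // Called on form submit: the posted value must match the stored one.
    // Issuing a fresh token per form makes it a nonce and blocks replays.
    public static boolean isValid(HttpServletRequest req) {
        Object expected = req.getSession().getAttribute("_csrf");
        String actual = req.getParameter("_csrf");
        return expected != null && expected.equals(actual);
    }
}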

Double Submit Cookie

With the Double Submit Cookie approach, two cookies are sent back to the browser. One is the session id and the other is a random value (similar to the synchronizer token). There are two keys to this mechanism working. The first is a mechanism built into the browser called the Same Origin Policy: script code may interact with other server endpoints only if those endpoints have the same origin (base URI) as the endpoint that delivered said script code. You might be asking yourself, “If one cookie isn’t secure on its own, how are two cookies going to be more secure?” The key is in the second enabling mechanism: having the second cookie included in subsequent requests in a custom header. It is up to your client script code to ensure that this is set up properly. Here’s how the interaction works:

[Image: double submit cookie flow]

When you request the login page, two cookies are sent back by the server. The second cookie is used in a custom header (X-XSRF-Token in this case, but it could be anything) for subsequent requests from the browser. The server checks for the existence of the custom header and checks the value against what was sent for that page.

Similar to the Synchronizer Token approach, an external site trying to spoof a page and trick you into submitting data to an active session would fail, as it would not be able to set the custom header for a site at a different URL.
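
Expressed as code, the server-side half of the check is a simple comparison. Here is a sketch of a servlet filter doing it, with the X-XSRF-Token header and XSRF-TOKEN cookie names used purely as conventions:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DoubleSubmitFilter implements Filter {
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        String header = req.getHeader("X-XSRF-Token");
        String cookie = null;
        if (req.getCookies() != null) {
            for (Cookie c : req.getCookies()) {
                if ("XSRF-TOKEN".equals(c.getName())) {
                    cookie = c.getValue();
                }
            }
        }
        // A forged cross-site request carries the cookie (the browser sends it
        // automatically) but cannot set the custom header, so the values differ.
        if (cookie == null || !cookie.equals(header)) {
            ((HttpServletResponse) response).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(request, response);
    }

    @Override public void init(FilterConfig filterConfig) {}
    @Override public void destroy() {}
}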

Origin Header Check

All browsers, including Internet Explorer 9 and later, send an Origin header in their requests. This header cannot be set by JavaScript, which gives you a high degree of confidence that the browser has the right information in it. The server can then explicitly check that header and verify that the base URI matches its own. If the server is set to reject browser requests where the base URI in the Origin header does not match the expected value, then a third-party site trying to spoof the look of your page would be foiled, as the Origin set by the browser would be different from what was expected.

Here’s what it looks like:

[Image: Origin header check]

When I submit the form to register for a Stormpath account, the browser automatically includes the Origin: https://api.stormpath.com header. Stormpath’s servers can check for that header and reject the request if the value of the Origin header is something else.
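
The corresponding server-side check fits in a few lines. A sketch, where expectedOrigin would be your own application’s origin (https://api.stormpath.com in the example above):

import javax.servlet.http.HttpServletRequest;

// Reject state-changing requests whose Origin header does not match the
// origin we serve the application from. Whether to allow requests with no
// Origin header at all (older clients) is a policy decision for your app.
boolean isAllowedOrigin(HttpServletRequest req, String expectedOrigin) {
    String origin = req.getHeader("Origin");
    return origin == null || expectedOrigin.equals(origin);
}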

This section on the remedies for Cross Site Request Forgery has focused primarily on securing the browser. We are next going to look at session IDs themselves with an eye to the server side of the interactions and how we can secure them.

Session ID Challenges

The session IDs we’ve been looking at so far, usually managed in the form of cookies, have a number of challenges associated with them. Of primary importance is that as your infrastructure grows, you may find it difficult for your session mechanism to grow with you.

Imagine you start out with one application server. It manages sessions by saving them to a local datastore, such as Redis. Your service takes off, and you need three application servers to handle the load. Now the application server a user connects to when starting their session may not be the one that handles that user’s next request. You may find yourself needing a whole centralized session id de-referencing service – that is, a service to ensure that all sessions are kept in sync across all of your application servers. This is a challenging issue at scale.

Even on a single application server instance, there’s a cost to session IDs. The user and session data associated with each id has to be stored, and it must be referenced on each and every interaction with the application server. This can be costly in terms of slower resources, such as persisting sessions to disk, or in the memory required to keep this data cached.

Session IDs have no inherent value other than as a unique identifier. A client, such as your web browser, cannot inspect the session id to find out what you are allowed to do in the application. Separate requests are needed to get that authorization information.

This is where Token Authentication comes in.

Use Token Authentication To Secure Your Single-Page Application

In the sequel to this post, we’ll dive into how Token Authentication can be used to address these issues and more. We’ll focus on how JSON Web Tokens (JWTs) can not only be used as session identifiers, but also carry encoded meta-data and are cryptographically signed. We’ll see this in action in a Java code example.

Java developers can see these techniques in action in my tutorial on Token Authentication for Java Web Applications – it covers how your Java app can benefit from token auth, walks through a Java example available in the Stormpath Java SDK repo, and shows you how to use tokens in your own Java application.

Feel free to drop us a line by email, or reach out to me personally, anytime.


CourionMulti-Authentication Passwords: The New Normal [Technorati links]

August 13, 2015 12:25 PM

Access Risk Management Blog | Courion

This week we are proud to present a spotlight blog from one of our trusted partners, Mr. Andy Osburn of SecureReset. With over 15 years of experience in network password reset, Andy and his team are an integral part of what makes Courion great. Take it away, Andy!

[Photo: Andy Osburn, SecureReset]

You can’t throw a digital rock in the IT security blogspace without hitting an article concerning the risks and consequences related to password compromise. This attention is well-placed given the numerous high profile cases of data theft and reputational losses that can be traced back to either weak or stolen passwords.

The recognition of the inherent risk in any single-factor authentication method is not new. In 2001, the US Federal Financial Institutions Examination Council (FFIEC) issued guidance on authentication in the electronic banking environment, identified the risks and controls, and concluded that “single factor authentication alone may not be commercially reasonable or adequate for high risk applications and transactions.” This reality has generated a wider call to move beyond security’s reliance on passwords and their ever-increasing complexity and rotation. When employed as a single factor to verify identity and grant access to critical enterprise resources, the overwhelming conclusion is that the password is simply not good enough.

The FFIEC went further to advocate the use of multi-factor authentication (MFA) where two or more of the three basic factors are used in combination. 

So it begs the question: if the risks, consequences, and potential solutions have been known for 15+ years, why has there not been wider adoption and usage of MFA?

[Image: thumbprint scan]

Well, the answer lies in the fact that the implementation of additional authentication control methods in the IT security environment must take into account many considerations, not the least of which are user experience, cost, and convenience.

Early MFA solutions that incorporated smart cards, biometric scanners, and hardware tokens, in addition to knowledge authentication, made significant strides in elevating the security of user authentication. However, the relative complexity and inconvenience of these MFA solutions hampered widespread adoption in the enterprise marketplace. This experience, together with the relatively high lifecycle management costs of the solutions, limited the scope of usage to environments requiring higher-end authentication security.

So what has changed in the intervening period, through to today’s reality of enterprise environments and authentication challenges? Two things. The first is the acceptance of the high risk inherent in single-factor authentication and the corresponding potential for significant data and reputational losses. The second is the ubiquity of the mobile smart device.

Each of us now carries a mobile device that has tremendous capability to behave as a security token. Not only is there exceptional computing capacity, but perhaps even more importantly, we as users are now completely comfortable employing these devices for a myriad of common daily routines. It is only natural that we now look to use these devices as part of an enterprise MFA strategy.

 

[Image: mobile device]

This new mobile MFA capability is reflected in the products available to enterprise customers from Courion partners such as QuickFactor and Ping Identity. Both companies are members of the FIDO (“Fast Identity Online”) Alliance, an industry organization created to address the lack of interoperability among strong authentication devices and the problems users face creating and remembering multiple usernames and passwords.

These advances in mobile products and standards mean that the new reality of enterprise user authentication strikes a better balance between security and convenience. End users have more flexible authentication choices, and the enterprise can now leverage the significant capabilities of mobile authentication with three true factors.

Coming full circle, then: it is unlikely that the password will completely go away. However, it is equally unlikely that it will continue to exist in the familiar form we know today. What we can expect to see is the password playing a role as a one-time-use or rotating, knowledge-based authentication component of the mobile MFA model. When employed wisely in an MFA structure, the password can still prove to be a valuable authentication factor.

For more information on how Courion works with SecureReset to create the most innovative and industry leading technology, read more on our datasheet or click here for information on SecureReset and our other partners.


Radovan Semančík - nLightOracle Security [Technorati links]

August 13, 2015 10:48 AM

There are not many occasions when a CxO of a big software company speaks openly about sensitive topics. A few days ago that happened at Oracle. Oracle’s CSO Mary Ann Davidson posted a blog entry about reverse engineering of Oracle products. Although it was perhaps not the original intent of the author, the blog post quite openly described several serious problems of closed-source software. That might be the reason why the post was taken down very shortly after it was published. Here is a Google cached copy and a copy on seclist.org.

So, what are the problems of closed-source software? Let’s look at Davidson’s post:

"A customer can’t analyze the code ...". That's right. The customer cannot legally analyze the software that is processing his (sensitive) data. Customer cannot contract independent third party do to this analysis. Customer must rely on the work done by the organizations that the vendor choses. But how independent are these organization if the vendor is selecting them and very often the vendor pays them?

"A customer can’t produce a patch for the problem". Spot-on. The customer is not allowed to fix the software. Even if the customer has all the resources and all the skills he cannot do it. The license does not allow fixing a broken thing. Only vendor has the privilege to do that. And customer is not even allowed to fully check the quality of the fix.

"Oracle’s license agreement exists to protect our intellectual property." That's how it is. Closed-source license agreements are here to protect the vendors. They are not here to make the software better. They are not here to promote knowledge or cooperation. They are not here to prevent damage to the software itself or to the data processed by the software. They are not helping the customer in this way. Quite the contrary. They are here for the purpose of protecting vendor's business.

In the future, children will learn about the historical period of the early 21st century. The teacher might mention the prevailing business practices as a curiosity to attract the attention of the class. The kids won’t believe that people in the past agreed to the draconian terms that were known as “license agreements”.

(Reposted from Evolveum blog)

Vittorio Bertocci - MicrosoftADAL 3 didn’t return refresh tokens for ~5 months… and nobody noticed [Technorati links]

August 13, 2015 07:28 AM


As you know, ADAL is not meant to be a protocol library. You tell us about your client app and the resource you want to access; we get the proper tokens for you from Azure AD, via a few simple primitives and without burdening you with nitty-gritty protocol details.

That said… that arrangement is not 100% air tight. We do occasionally leak the abstraction: for example, we use OAuth-specific terminology here and there (redirect uri…); we accept protocol parameters in extraQueryParameters; and so on. When we do so, we like to think it’s never an oversight, but a deliberate decision. We usually weigh whether the convenience of providing access to lower level constructs outweighs the complexity we’d burden you with for preserving a façade… if the convenience comes out a winner, we go for it: after all, ADAL is not a science experiment, it’s something meant to make your life easier.

The refresh token in AuthenticationResult, and the corresponding AcquireTokenByRefreshToken method, are one such violation. You don’t really ever need to use the refresh token from your own app code, given that ADAL caches it and will automagically use it whenever you call AcquireToken and the requested token needs renewing. See this and this for details. In ADAL v1 that wasn’t strictly true, given that the cache was specialized for native clients and not really suited for server-side use: we had to provide you with the refresh token bits, so that you could make your own arrangements in web sites’ code-behind or in any scenario other than the supported native client case. In ADAL v2 we improved the caching infrastructure to support server-side scenarios, extending ADAL’s automatic and transparent use of the refresh token to all the mid-tier flows (or, if you want me to leak protocol details… to all confidential client grants). However, we weren’t completely certain that this would address ALL possible scenarios, so we decided to keep exposing the refresh token in our object model.

That’s when we started noticing odd things. Developers with little or no protocol knowledge – the vast majority – happily relied on ADAL to do all the session management on their behalf and blissfully enjoyed all the automatic refresh token usage I described. So did all the people who used our GitHub samples or the VS templates as a starting point for their own apps. However, some developers, typically the ones with existing protocol knowledge, skipped the samples altogether and attempted to use the library only “by IntelliSense”: knowing that OAuth2 does use refresh tokens, assuming that ADAL uses OAuth2, and finding methods accepting refresh tokens made them conclude that the responsibility for storing and using refresh tokens fell on their app code. That led to tons of extra work, code structure far more complicated than necessary, security issues and reduced functionality – for example, not all of those devs knew what an MRRT is.

Thankfully, the number of developers falling into that trap was very small in comparison to total ADAL usage, but it made us think – do we really need to keep returning the refresh token and accepting it in AcquireToken? We combed through mail threads, forum posts and customer docs searching for scenarios that could not be addressed by ADAL’s automatic use of the refresh token – and found none. On April 22nd we built a NuGet package for ADAL v3.x with all signs of refresh tokens removed from ADAL’s programming surface, then pushed it out on NuGet.org.

Five months later, things still seem to be going pretty well. It is not entirely true that absolutely nobody noticed – but the couple of people who did turned out to be perfectly well served by ADAL’s automatic use of cached refresh tokens.

Per the above, at this point we are reasonably confident that we can ship ADAL 3.x without leaking refresh tokens – and that’s good, because the moment of shipping is getting closer and closer (nope, I can’t share the exact date yet – sorry!).

That said, it is always possible that we missed some important scenario. That’s where YOU come into play. If you are using ADAL v2 and you are relying directly on refresh token bits anywhere in your code, please get in touch with us – we would like to understand whether this means we need to bring refresh tokens back after all, or whether there is a way of achieving the functionality you need using ADAL’s cache… we would love the opportunity to show you how.

Thanks in advance for all your feedback, and happy coding!

August 12, 2015

Kantara InitiativeKantara Initiative’s Trust Status List Grows [Technorati links]

August 12, 2015 09:39 PM

PISCATAWAY, NJ–(August, 2015) – Kantara Initiative is proud to announce that Kimble & Associates is now a Kantara-Accredited Assessor with the ability to perform Kantara Service Assessments at Assurance Levels 1, 2 and 3, in the USA jurisdiction.

Additionally, Electrosoft, one of the Kantara Accredited Assessors, received a new grant of Trustmark to continue performing Kantara Service Assessments at Assurance Levels 1, 2, 3 and 4, in the USA and Canada jurisdictions.

Joni Brennan, Kantara Executive Director said, “The Kantara Accredited Assessor community of experts is growing and we are delighted to welcome Kimble & Associates as the newest Kantara Accredited Assessor. Furthermore, Electrosoft has garnered re-Accreditation demonstrating proven leadership.”

Ray Kimble, President of Kimble & Associates said “With this certification, we’re excited to continue to drive the vision of a more secure identity ecosystem. By aligning with initiatives like Connect.gov, NSTIC and FICAM, businesses and government agencies have more secure and privacy-enhancing choices to enable next generation identity solutions and offer streamlined access for the public. At the end of the day, it’s about making it easier and more secure to conduct business online.”

To learn more about Kimble & Associates, please visit www.kimbleassociates.com

Dr. J. Greg Hanson, Electrosoft’s Vice President of Operations said “Electrosoft is proud to renew our role as a Kantara Accredited Assessor.  We view Kantara’s position as a trust framework provider as critical to enabling the widespread adoption of reusable strong online credentials. As Electrosoft has done for the previous three years for our Kantara clients, we look forward to providing both assessment and consulting support to help grow this secure and trusted community.”

To learn more about Electrosoft, please visit www.electrosoft-inc.com

Kantara Initiative, as a global organization, represents a neutral platform to address the converging Identity Ecosystem interests of the private sector and governments. Kantara Initiative provides spaces for development of industry policies and standards, with the aim of innovating solutions that apply at the global scale.

Kantara Initiative, a US Federal Trust Framework Provider, Accredits Assessors, Approves Credential and Component Service Providers (CSPs) at Levels of Assurance 1, 2 and 3 to issue and manage trusted credentials for ICAM and industry Trust Framework ecosystems.

Kantara Initiative members have the opportunity to profile the core Identity Assurance Framework for applicability to their specific communities of trust. Industry leaders join Kantara to increase industry visibility, create strategic relationships, follow international trends, access Identity Management tech and policy experts, benefit from a lightweight set of tools and policies to support work streams, and gain access to international organizations such as the OECD, ISO & ITU-T, among others.

About Kantara Initiative

Kantara Initiative is a membership non-profit organization that provides strategic vision and real world innovation for the digital identity transformation. Developing initiatives including Identity Relationship Management, User Managed Access (EIC Award Winner for Innovation in Information Security 2014), Identities of Things, and Minimum Viable Consent Receipt, Kantara Initiative connects a global, open, and transparent leadership community. Kantara Initiative is an industry and community organization that enables trust in identity services through its compliance programs, requirements development, and information sharing among communities including industry, research & education, government agencies and international stakeholders.

 

August 11, 2015

CourionA flaw in your fingerprint and living like the Jetsons – it’s #TechTuesday [Technorati links]

August 11, 2015 12:10 PM

Access Risk Management Blog | Courion


August 10, 2015

Nat SakimuraVulnerability Lets Anyone Read Fingerprint Images on HTC Smartphones – Stock Price Plunges [Technorati links]

August 10, 2015 05:53 PM
[Image: The HTC One Max’s fingerprint reader was found to store fingerprints in a form anyone could read. Photo: HTC]

According to an article in The Guardian [1], it has come to light that HTC smartphones stored users’ fingerprint images in a form that anyone could read. The discovery was made by four researchers at FireEye, whose paper [3] was presented at BlackHat [2] on August 5. The fingerprint images were reportedly stored, unencrypted and world-readable, at /data/dbgraw.bmp, meaning that apps and the like could read them freely.

Following the discovery, HTC’s stock price plunged by nearly 20%, and its market capitalization is reportedly now below its liquidation value [4].

[Image: HTC’s stock price plunged after the disclosure]

This particular security hole is HTC’s, but many smartphone makers, including Samsung, do not use the built-in security features provided by ARM and others, so attackers can reportedly keep reading users’ fingerprints freely, covertly, and without ever being noticed.

Password theft has become a major problem, but the theft of biometric data – especially raw biometric data – carries far graver risks, because unlike passwords, biometrics cannot be replaced. As a result, it could produce identity theft problems even more serious than those caused by stolen passwords. Such data demands far more careful handling.

[1] The Guardian: “HTC stored user fingerprints as image file in unencrypted folder”, (2015/8/10)  http://www.theguardian.com/technology/2015/aug/10/htc-fingerprints-world-readable-unencrypted-folder

[2] BlackHat Briefings – August 5-6, https://www.blackhat.com/us-15/briefings.html

[3] Zang, Y., Zhaofeng, C., Xue, H., Wei, T.: “Fingerprints On Mobile Devices: Abusing and Leaking”, (2015/8) https://www.blackhat.com/docs/us-15/materials/us-15-Zhang-Fingerprints-On-Mobile-Devices-Abusing-And-Leaking-wp.pdf

[4] Biggs, J.:”HTC Is Now Essentially Worthless (And Insecure)”, (2015/8/10), TechCrunch, http://techcrunch.com/2015/08/10/htc-is-now-essentially-worthless-and-insecure/?ncid=rss&utm_medium=twitter&utm_source=twitterfeed

Nat SakimuraSince It’s Obon, Some Thoughts on the Security Bills and Constitutional Revision [Technorati links]

August 10, 2015 12:52 PM

I read Isseki Nagae’s piece, “Why supporters of the security bills feel uncomfortable with the demonstrations, and alternatives to the bills” [1].

It is Nagae’s analysis of why supporters of the bills, such as Mr. Hori and Mr. Horie, harbor such strong aversion to the demonstrations, and it strikes me as on the mark. It is worth a read.

一方で、永江氏は安保反対派への戦術的な提案としては、

1 国会へではなく、世界中の中国大使館に圧力をかける
(中国が暴挙を止めれば安保法案の必要性は激減)

2 火力発電止めて全部原子力発電でいくことを主張する
(エネルギー安全保障の観点[2]。)

の2点を上げている。

Indeed, I believe that when you oppose something you should offer an alternative, so I agree with these proposals.

And one more request, from me to the bills’ proponents.

Face constitutional revision head-on. Quietly cobbling together unconstitutional legislation because amending the constitution looks hard is a denial of constitutionalism, a denial of modernity, a regression to the middle ages. Openly abolishing Article 9 would be incomparably better than that.

The LDP’s draft constitutional amendment [3] is laughable, but that does not mean I think the current Constitution of Japan is good, either.

How about starting by deleting the provisions imposing duties on the people from Articles 26, 27, and 30 of the Constitution of Japan? A constitution is the people’s set of commands to state power; it is odd in the first place for it to contain duties of the people. Yet the current constitution provides:

Article 26. All people shall have the right to receive an equal education correspondent to their ability, as provided by law. All people shall be obligated to have all boys and girls under their protection receive ordinary education as provided for by law. Such compulsory education shall be free.

Article 27. All people shall have the right and the obligation to work. Standards for wages, hours, rest and other working conditions shall be fixed by law. Children shall not be exploited.

Article 30. The people shall be liable to taxation as provided by law.

These provisions impose duties on the people. Incidentally, the corresponding GHQ draft contains no duties of the people. The counterpart of Articles 26 and 27 of the current constitution is Article XXIV, which reads as follows.

GHQ Draft

Article XXIV. In all spheres of life, laws shall be designed for the promotion and extension of social welfare, and of freedom, justice and democracy. Free, universal and compulsory education shall be established. The exploitation of children shall be prohibited. The public health shall be promoted. Social security shall be provided. Standards for working conditions, wages and hours shall be fixed.

(My translation: In every sphere of life, the government must enact laws to promote and extend social welfare, freedom, justice and democracy. The government must provide free, universal and compulsory education. The government must prohibit the exploitation of children. The government must promote public health. The government must provide social security. The government must fix standards for working conditions, wages and hours.)

Combining education and labor in a single article is questionable, but the draft consistently spells out obligations of the government, not duties of the people – which is exactly as it should be.

So how about proposing amendments to Articles 26, 27, and 30 along these lines:

Article 26. All people shall have the right to receive an equal education correspondent to their ability, as provided by law. The government must provide free, universal and compulsory education. (Delete: All people shall be obligated to have all boys and girls under their protection receive ordinary education as provided for by law. Such compulsory education shall be free.) [4]

Article 27. All people shall have the right to work (delete: and the obligation). The government must fix by law standards for wages, hours, rest and other working conditions. The government must enact laws prohibiting the exploitation of children. (Delete: Children shall not be exploited.)

Article 30. (Delete the entire article: The people shall be liable to taxation as provided by law.)

I think it would be good to put forward an amendment along these lines. Something of this scale could probably pass, couldn’t it? And once it passed, there would be a method and a precedent for constitutional amendment, so we could then try amending Article 9 as well.

Conversely, a word to the opposition parties and to the public: precisely in order to beat back odd proposals such as loosening the amendment requirements of Article 96, you really do need to pass at least this much.

[1] Isseki Nagae, “Why supporters of the security bills feel uncomfortable with the demonstrations, and alternatives to the bills” (2015/8/9)

[2] Even if going fully nuclear is impossible, the dependence ratio could be lowered, so maintaining passage through the Strait of Hormuz by force would become less critical than it is today.

[3] Liberal Democratic Party, “Draft Amendment to the Constitution of Japan” (2012/4/27). As Yoichi Masuzoe argues in “Diet members who do not know the basics of the constitution”, among others, its drafters fundamentally misunderstand what a constitution is.

[4] A preemptive note, since someone will surely object that unless compulsory education is written as a duty of the people, some parents will not send their children to school and some children will not go. That sort of thing belongs in ordinary statutes, not in the constitution.

※ The featured image is the work of はてなココ, provided under CC 3.0-BY. The original image is here: http://free-illustrations.gatag.net/2014/01/01/120000.html

Mike Jones - MicrosoftJWS Unencoded Payload Option specification [Technorati links]

August 10, 2015 04:10 AM

IETF logoThe former JWS Signing Input Options specification has been renamed to JWS Unencoded Payload Option to reflect that there is now only one JWS Signing Input option defined in the spec – the "b64": false option. The "sph" option was removed by popular demand. I also added a section on unencoded payload content restrictions and an example using the JWS JSON Serialization.

The specification is available at:

An HTML formatted version is also available at:

August 09, 2015

Matthew Gertner - AllPeersWant to Save on travel? Check Out Groupon Coupons! [Technorati links]

August 09, 2015 04:20 PM

If you like to travel, then you’ve come to the right blog post because we’ve got some great news!

The biggest deal-a-day coupon provider, Groupon, has now entered the internet coupon niche and they’re going big.

They have utilized their resources and created strategic alliances with more than 9,000 different stores, services and retailers. What does this mean? Well, how about more than 70,000 active coupons! The coolest part about it all is that you don’t need to sign up for anything on the Groupon website to have access to all the coupons – you just need to “show up”!

[Image: Groupon travel coupons]

Groupon knows that travel is one of the biggest niches people want to save on, so they’ve partnered up with brands like Hotwire, Orbitz, Travelocity, CheapOair, Fox Rent a Car and Hotels.com, to name just a few of the companies they’re working with.

All you need to do to save is go to the Groupon Coupons website and start exploring!

Bon Voyage.

The post Want to Save on travel? Check Out Groupon Coupons! appeared first on All Peers.

Matthew Gertner - AllPeersFirst Date Tips for Shy Guys [Technorati links]

August 09, 2015 02:56 PM

For most shy guys, dating tends to be pretty stressful as well as exciting, and this excitement is doubled when it comes to the first date. Indeed, anticipation of that very first date you’ve been dreaming about causes a surge of emotions, and nervousness is undoubtedly one of the strongest. If you are about to go out with a girl you like, you need to keep your nerves in check, as diffidence may spoil all the fun.

Here are a few first date tips for shy guys, by EasyDateNow, that will help you make a good impression on a girl.


Keep the Date Short

Since people are always nervous when going out for the first time, a short date is the best option for you and your partner. Thus, rule number one: keep the first date short, and the girl will have no time to notice your nervousness. A few hours is usually more than enough to get to know each other better and find a reason to go on another date.

Where to Go

So, your task number two is to choose the right place for your first date. Remember that the first date should be short, so keep that rule in mind when deciding on the right place. It should also be a quiet place where you can talk. For example, you can take a girl out for lunch, dinner or coffee. A good idea is to go to a museum, where you can talk about different things while walking through the exhibits. An amusement park is also a great choice, because you are bound to spend an unforgettable time there.

What to Talk About

Never talk about your last relationship on your first date! The first date is also not the best time to talk about your bad habits or reveal all your dark secrets. Concentrate on the person next to you and keep the conversation casual. The most universal topics of conversation are your interests and hobbies, jobs, books, movies, etc. And finally, always mind your manners – you are dating a lady.

Be Ready to Pay

Though many girls nowadays do not mind paying for themselves, it would be better if you cover all the expenses. Of course, there will be lots of relationship gurus who will recommend that you not pay, but if you want to impress a girl, show your generosity by paying for the date. Your lady will definitely appreciate this gentlemanly deed.

Make a New Date

If everything goes smoothly and you like the girl, do not be afraid to ask her for another date. You will definitely feel whether or not the girl will want to meet you again, so if everything seems OK, do not be shy and ask her out again.

Hopefully these simple tips will help you enjoy your first date!

The post First Date Tips for Shy Guys appeared first on All Peers.

August 07, 2015

Vittorio Bertocci - MicrosoftADAL Diagnostics [Technorati links]

August 07, 2015 08:15 PM


Every time somebody needs help troubleshooting an app using ADAL, one of the first things we ask is to provide ADAL logs (and possibly a Fiddler trace as well).

I usually have to write something like “you can find instructions on how to capture ADAL .NET logs in the Diagnostics section of https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/blob/master/README.md”, which is more typing work than I’d like (and occasionally people still can’t find the right section). And there’s a different README for each ADAL flavor’s repo. Hence, I am pasting below the Diagnostics sections from the commonly used ADALs (.NET, Android, iOS) and plan to send a link to this post instead – there you go:

Diagnostics for ADAL .NET

The following are the primary sources of information for diagnosing issues:

Also, note that correlation IDs are central to the diagnostics in the library. You can set your correlation IDs on a per-request basis (by setting the CorrelationId property on AuthenticationContext before calling an acquire token method) if you want to correlate an ADAL request with other operations in your code. If you don’t set a correlation id, ADAL will generate a random one which changes on each request. All log messages and network calls will be stamped with the correlation id.

Exceptions

This is obviously the first diagnostic. We try to provide helpful error messages. If you find one that is not helpful, please file an issue and let us know. Please also provide the target platform of your application (e.g. Desktop, Windows Store, Windows Phone).

Logs

You can configure the library to generate log messages that you can use to help diagnose issues. You configure logging by setting properties of the static class AdalTrace; however, depending on the platform, logging methods and the properties of this class differ. Here is how logging works on each platform:

Desktop Applications

ADAL.NET for desktop applications by default logs via the System.Diagnostics.Trace class. You can add a trace listener to receive those logs. You can also control tracing (e.g. change the trace level or turn it off) using AdalTrace.LegacyTraceSwitch.

The following example shows how to add a Console based listener and set trace level to Information (the default trace level is Verbose):

Trace.Listeners.Add(new ConsoleTraceListener());
AdalTrace.LegacyTraceSwitch.Level = TraceLevel.Info;

You can achieve the same result by adding the following lines to your application’s config file:

  <system.diagnostics>
    <sharedListeners>
      <add name="console" 
        type="System.Diagnostics.ConsoleTraceListener" 
        initializeData="false"/>
    </sharedListeners>
    <trace autoflush="true">
      <listeners>
        <add name="console" />
      </listeners>
    </trace>    
    <switches>
      <add name="ADALLegacySwitch" value="Info"/>
    </switches>
  </system.diagnostics>

If you would like to have more control over how tracing is done in ADAL, you can add a TraceListener to ADAL’s dedicated TraceSource with name “Microsoft.IdentityModel.Clients.ActiveDirectory”.

The following example shows how to write ADAL’s traces to a text file using this method:

Stream logFile = File.Create("logFile.txt");
AdalTrace.TraceSource.Listeners.Add(new TextWriterTraceListener(logFile));
AdalTrace.TraceSource.Switch.Level = SourceLevels.Information;

You can achieve the same result by adding the following lines to your application’s config file:

  <system.diagnostics>
    <trace autoflush="true"/>
    <sources>
      <source name="Microsoft.IdentityModel.Clients.ActiveDirectory" 
        switchName="sourceSwitch" 
        switchType="System.Diagnostics.SourceSwitch">
        <listeners>
          <add name="textListener" 
            type="System.Diagnostics.TextWriterTraceListener" 
            initializeData="logFile.txt"/>
          <remove name="Default" />
        </listeners>
      </source>
    </sources>    
    <switches>
      <add name="sourceSwitch" value="Information"/>
    </switches>
  </system.diagnostics>

Windows Store and Windows Phone Applications

Tracing in ADAL for Windows Store and Windows Phone is done via an instance of class System.Diagnostics.Tracing.EventSource with name “Microsoft.IdentityModel.Clients.ActiveDirectory”. You can define your own EventListener, connect it to the event source and set your desired trace level. Here is an example:

var eventListener = new SampleEventListener();

class SampleEventListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource eventSource)
    {
        if (eventSource.Name == "Microsoft.IdentityModel.Clients.ActiveDirectory")
        {
            this.EnableEvents(eventSource, EventLevel.Verbose);
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        ...
    }
}

There is also a default event listener which writes logs to a local file named “AdalTraces.log”. You can control the level of tracing to that event listener using the property AdalTrace.Level. By default, the trace level for this event listener is set to “None”; to enable tracing to this particular listener, you need to set that property. Here is an example:

AdalTrace.Level = AdalTraceLevel.Informational;

Windows Phone Silverlight Applications

Since Silverlight does not support EventSource/EventListener, we use LoggingChannel/LoggingSession for logging. There is a LoggingChannel in AdalTrace to which you can connect your own LoggingSession/FileLoggingSession, and which you can also use to control the trace level. Here is an example:

LoggingSession loggingSession = new LoggingSession("ADAL Logging Session");
loggingSession.AddLoggingChannel(AdalTrace.AdalLoggingChannel, LoggingLevel.Verbose);

and then use loggingSession.SaveToFileAsync(...) to copy the logs to a file. If you use the emulator, you can then use the ISETool.exe and tracerpt.exe tools to copy the log file and convert it to text format.

Network Traces

You can use various tools to capture the HTTP traffic that ADAL generates. This is most useful if you are familiar with the OAuth protocol or if you need to provide diagnostic information to Microsoft or other support channels.

Fiddler is the easiest HTTP tracing tool. In order to be useful, it is necessary to configure Fiddler to record unencrypted SSL traffic.

NOTE: Traces generated in this way may contain highly privileged information such as access tokens, usernames and passwords. If you are using production accounts, do not share these traces with 3rd parties. If you need to supply a trace to someone in order to get support, reproduce the issue with a temporary account with usernames and passwords that you don’t mind sharing.

=================================

Diagnostics for ADAL Android

The following are the primary sources of information for diagnosing issues:

Also, note that correlation IDs are central to the diagnostics in the library. You can set your correlation IDs on a per-request basis if you want to correlate an ADAL request with other operations in your code. If you don’t set a correlation id, ADAL will generate a random one, and all log messages and network calls will be stamped with it. The self-generated id changes on each request.
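
For example, you can stamp a request yourself before acquiring a token. A minimal sketch – authContext is assumed to be an existing AuthenticationContext, and setRequestCorrelationId is assumed to be the setter recent ADAL Android releases expose for this, so verify it against the version you use:

import java.util.UUID;
import android.util.Log;

// Use one id for the whole logical operation so ADAL's log lines and
// network calls can be matched with your own application logs.
UUID correlationId = UUID.randomUUID();
authContext.setRequestCorrelationId(correlationId); // assumed ADAL Android setter
Log.i("MyApp", "acquiring token, correlationId=" + correlationId);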

Exceptions

This is obviously the first diagnostic. We try to provide helpful error messages. If you find one that is not helpful, please file an issue and let us know. Please also provide device information such as model and SDK#.

Logs

You can configure the library to generate log messages that you can use to help diagnose issues. You configure logging by registering a callback that ADAL will use to hand off each log message as it is generated:

 Logger.getInstance().setExternalLogger(new ILogger() {
     @Override
     public void Log(String tag, String message, String additionalMessage, LogLevel level, ADALError errorCode) {
      ...
      // You can write this to a log file depending on level or errorCode.
      writeToLogFile(getApplicationContext(), tag + ":" + message + "-" + additionalMessage);
     }
 });

Messages can be written to a custom log file as seen below. Unfortunately, there is no standard way of getting logs off of a device. There are some services that can help you with this, or you can invent your own, such as sending the file to a server.

private synchronized void writeToLogFile(Context ctx, String msg) throws IOException {
       File directory = ctx.getDir(ctx.getPackageName(), Context.MODE_PRIVATE);
       File logFile = new File(directory, "logfile");
       FileOutputStream outputStream = new FileOutputStream(logFile, true);
       OutputStreamWriter osw = new OutputStreamWriter(outputStream);
       osw.write(msg);
       osw.flush();
       osw.close();
}

Logging Levels

You set the log level like this:

Logger.getInstance().setLogLevel(Logger.LogLevel.Verbose);

All log messages are sent to logcat in addition to any custom log callbacks. You can get the logs into a file from logcat as shown below:

  adb logcat > "C:\logmsg\logfile.txt"

More examples of adb commands: https://developer.android.com/tools/debugging/debugging-log.html#startingLogcat

Network Traces

You can use various tools to capture the HTTP traffic that ADAL generates. This is most useful if you are familiar with the OAuth protocol or if you need to provide diagnostic information to Microsoft or other support channels.

Fiddler is the easiest HTTP tracing tool. Use the following links to set it up to correctly record ADAL network traffic. In order to be useful, it is necessary to configure Fiddler, or any other tool such as Charles, to record unencrypted SSL traffic. NOTE: Traces generated in this way may contain highly privileged information such as access tokens, usernames and passwords. If you are using production accounts, do not share these traces with 3rd parties. If you need to supply a trace to someone in order to get support, reproduce the issue with a temporary account with usernames and passwords that you don’t mind sharing.

=================================

Diagnostics for ADAL iOS

The following are the primary sources of information for diagnosing issues:

Also, note that correlation IDs are central to the diagnostics in the library. You can set your correlation IDs on a per-request basis if you want to correlate an ADAL request with other operations in your code. If you don’t set a correlation id, ADAL will generate a random one, and all log messages and network calls will be stamped with it. The self-generated id changes on each request.

NSError

This is obviously the first diagnostic. We try to provide helpful error messages. If you find one that is not helpful, please file an issue and let us know. Please also provide device information such as model and SDK#. The error message is returned as part of the ADAuthenticationResult, where the status is set to AD_FAILED.

Logs

You can configure the library to generate log messages that you can use to help diagnose issues. ADAL uses NSLog by default to log the messages. Each API method call is decorated with the API version, and every other message is decorated with a correlation id and a UTC timestamp; this data is important for correlating client activity with server-side diagnostics. The SDK also exposes the ability to provide a custom logger callback, as follows.

    [ADLogger setLogCallBack:^(ADAL_LOG_LEVEL logLevel, NSString *message, NSString *additionalInformation, NSInteger errorCode) {
        //HANDLE LOG MESSAGE HERE
    }];

Logging Levels

You set the log level like this:

[ADLogger setLevel:ADAL_LOG_LEVEL_INFO];

Network Traces

You can use various tools to capture the HTTP traffic that ADAL generates. This is most useful if you are familiar with the OAuth protocol or if you need to provide diagnostic information to Microsoft or other support channels.

Charles is the easiest HTTP tracing tool on OS X. Use the following links to set it up to correctly record ADAL network traffic. In order to be useful, it is necessary to configure Charles to record unencrypted SSL traffic. NOTE: Traces generated in this way may contain highly privileged information such as access tokens, usernames and passwords. If you are using production accounts, do not share these traces with 3rd parties. If you need to supply a trace to someone in order to get support, reproduce the issue with a temporary account with usernames and passwords that you don’t mind sharing.

WAYF NewsSIMAC now providing identities to WAYF [Technorati links]

August 07, 2015 12:31 PM

Svendborg International Maritime Academy – SIMAC joined WAYF as an identity provider at the beginning of this August. Consequently, Academy students and staff now have the ability to access WAYF-enabled web services using their SIMAC user accounts.

August 06, 2015

CourionInternships: Risk vs. Reward [Technorati links]

August 06, 2015 02:38 PM

Access Risk Management Blog | Courion

[Image: internship statistic]

By now, you’ve surely seen the signs, the sales, and the sad faces that signal the start of a new school year. While this may mean the end of summer as you know it, it also means the end of hundreds of thousands of summer internships. Did you know that 84% of college students plan on completing an internship before graduating? This means that – more than likely – you will have your fair share of interns coming and going from your organization each year. 

Don’t get me wrong; interns are great! They no longer serve just to grab your morning coffee. Interns today are integral members of your team and bring a fresh perspective, not to mention extra brainpower, to your projects. However, just as with all types of employees, they also bring their own set of risks, and you need to be prepared.

Privileged Access

It’s hard enough to know, even as a new full-time employee, what applications you need to access. Imagine being an intern and wondering what those applications are, what they do, and which ones you need. The task is daunting, to say the least. The challenge of helping new interns, and all new employees, understand what applications they need can be solved by an IAM solution that guides them through the provisioning phase.

With an intelligent IAM solution, your new interns will be guided through the system and shown applications that they have been pre-approved for based on their role. If an intern needs more privileged access for a project, they can request it, and a request will be sent to their manager for approval. With an intelligent provisioning solution, you save your interns time by showing them what applications they need, while cutting down on the risk of interns being granted privileged access to critical applications.

Millennial/Creative Risk

I am not a millennial, but I do understand their attraction to the newest and best of everything. Who doesn’t want to be up to date on the newest trends? For example, do you know what Kik, Snapchat, Yik Yak, and listicles are? Neither did I, until our newest marketing intern taught us all about these new and innovative social media platforms and formats. While interns bring in fresh knowledge and new applications for your company to take advantage of, you need to be aware of the risks they pose. Just as with BYOD risks, opening up your network to new social media sites, content applications, or other software can leave it vulnerable to attacks.

In order to make sure that you’re getting the best of both worlds – new information and a secure connection – make sure that you instill in your newest team members a culture of security. Through training videos, in-person demonstrations, and/or an ongoing culture of security in your organization, you will make them aware of practices such as not downloading anything without prior approval, checking with IT about BYOD devices, and more. Not only will your organization profit from strengthening your internal security team, but you will be imparting a vital career skill to each of your interns.

End-of-Session Threats

Hopefully, by the end of their session you have turned your unseasoned interns into experienced professionals. What is the easiest way to make sure your interns’ access is terminated? You guessed it: an intelligent IAM system. The same system that provisions access for your team will also monitor it for orphaned or misused accounts, so you will receive an alert if an intern’s account accesses applications outside of its role or springs back to life after an extended period of disuse – tipping you off that either your ex-intern or a hacker is in your system. Now, as you say goodbye and send your interns back to school, make sure that you’re saying goodbye to their user access as well. Just as with any employee who leaves the company, your interns’ access rights need to be terminated. Orphaned accounts are a major liability to your system and can be an easy target for hackers. Occasionally – not that any of your interns would do this – ex-interns log back onto the system after their program is over and steal information. Terminating their access rights before they have a chance to log back in is the safest way to prevent file theft.

Interns reviewing security policies

Did I scare you away from the possibility of bringing in your fall interns? I hope not. As I said before, interns are great and can be hugely beneficial for your organization. These team members can be an integral part of your organization and should be accepted as such. However, keep in mind that they have their own inherent risks and need to be treated with the same security protocols as any other members of your team. Make sure you are building more than just interns; build strong, security-aware team members that will continue to excel long after they’ve finished their program.

blog.courion.com

August 05, 2015

Matthew Gertner - AllPeersWhat to know about buying a steam shower! [Technorati links]

August 05, 2015 05:06 PM

You’ve been to expensive spas or indulged in the high-end shower facilities at your gym, but what if you want that same luxury in the comfort of your own home? Well, it can be yours! The home spa market has grown tremendously in recent years and there are countless options for custom and prefab steam shower units that you can have installed right in your bathroom. Check out this post to learn a few tips on how to go about getting your own little slice of heaven in your own home.

steam shower11

1. Is buying a steam shower the right move for you?

Before you make the decision, you need to explore all of the options. Keep in mind that installing a steam shower isn’t just about ordering one on Amazon and installing it yourself. You need to have a professional install your unit, and there might be additional costs to upgrade your bathroom to handle all the steam it will endure; a normal bathroom will quickly deteriorate if not treated correctly.

2. Prefab or Custom?

You must decide whether a prefab unit like the Insignia Steam Shower or a custom steam shower is the right choice for you. Measure your bathroom and look at your budget, since certain features you want might not be available in premade units. There is a wide range of options in the prefab market, from super budget to super luxury with every bell and whistle you can imagine. But if a certain look is what you’re after, or you have an awkward space, a custom steam shower might be the way to go.

3. How to find the right company to install a steam shower?

Installing a steam shower isn’t an easy task, and a normal plumber might not be able to handle it. You’ll want to do your research and ask for references from different steam shower installation companies. Whether you go custom or prefab, there are still a lot of electronics involved, and with the amount of moisture a steam shower creates, you’ll want to make sure things are properly insulated, covered, etc. A good idea is to speak with an interior designer who might have a good relationship with a quality bathroom remodeler.

Well, there you have it! Just a few things to consider when buying a steam shower. Whichever way you go, if you decide to install one, you’ll surely be living the good life, so just do your research and enjoy!

The post What to know about buying a steam shower! appeared first on All Peers.

Matthew Gertner - AllPeers10 Things To Help You Beat A Case Of The Mondays! [Technorati links]

August 05, 2015 04:15 PM

Summer is supposed to be a fun time, but when you are an adult and have to have a job in order to pay your bills and keep a roof over your head, summer loses a lot of its charm. You still want to have fun but responsibilities come first.

All of this work time can lead to a case of the Mondays when the beginning of the week rolls back around. Don’t let Monday get you down; instead come up with something fun that you can do in order to brighten your spirits and fight off the gloom.

There are a lot of things you can do to beat the blahs, but here are ten you may not have thought of, and some that might be at the top of your list. You’ll find that simply getting out and moving can make a huge difference.

2219889418_ddc0c30c8c_z

  1. Hang out at the beach/pool — Nothing beats the heat and hectic pace of summer like spending some time in the lake or at the pool. Even if you don’t feel like swimming, a nice relaxing time can be had just lying out on the beach.
  2. Have A Bonfire — Don’t feel like leaving home after a long day of work, but want to shake off Monday and enjoy the great outdoors? Have a bonfire in your yard, enjoy a beer and some s’mores, and maybe even invite a friend or two.
  3. Take A Jog or A Scenic Walk — Sometimes getting in a little workout can help lift your spirits, and summer is the perfect time to get outside. Even if you don’t have a lot of time, a quick jog can ease the nerves. If you have time, go on an awe-inspiring hike.
  4. Go See A Movie — Summer is a great time to catch the newest blockbuster films, like Ant Man. Why not use a night at the movies to cure your case of the Mondays?
  5. Adopt A Pet — Summer is a great time to adopt a new family member. People with pets are often happier and healthier than those without. If you spend a lot of time away from home you may want a cat, but if you have free time for training and lots of attention, a dog can be a great choice. Pets can boost your wellness and mental health too!
  6. Do Some Gardening — Gardening is a great way to beat the Monday blues. You get time outside and you also get to feel like you’ve accomplished something the next time you cook a meal with stuff you grew in your own garden.
  7. Enjoy A Hookah — If you ask some people why they smoke, it’s because smoking helps keep them calm and relaxed. Hookahs are often considered less harmful than cigarettes and, unlike pot, are not illegal in most places. There may be a hookah bar near you, or you can order a hookah of your own and enjoy smoking at home.
  8. Grab a drink from a local brewery — It’s not good to use drugs or alcohol to cure your problems, but that doesn’t mean that having a drink at a local brewery is a bad thing. You’re supporting local brewers, and that beer just might be exactly what your Monday needed.
  9. Play some Pool or Go Bowling — If you’re the sporty type, or just haven’t done either since childhood, playing a game of bowling or shooting some pool are both great ways to de-stress.
  10. Eat a meal fit for a king from a restaurant — A nice meal you didn’t have to cook yourself can be a great finish to a not so great Monday. Go all out and enjoy every bite. Take that Monday!

The post 10 Things To Help You Beat A Case Of The Mondays! appeared first on All Peers.

ForgeRockThe Real Reasons Customers Choose ForgeRock [Technorati links]

August 05, 2015 03:13 PM

What we hear from our customers shapes everything we do at ForgeRock, from how we build products to how we support our customers.

Recently, we commissioned the research firm TechValidate to survey our customers so we could learn how they use ForgeRock products to hit their business goals. We’ve got the results on display in a cool infographic here.

forgerock_techvalidate_infographic

Our major takeaways? Our customers are creating successful digital businesses, with the help of our reliable, scalable ForgeRock Identity Platform.

You can check out more of our results, from stats to quotes, here.

And, if you haven’t taken the survey yet, chime in here. We’d love to get your unvarnished opinions. We’re already taking action on some of the suggestions customers shared with us via the survey, so we can give you the best possible experience with our products.

The post The Real Reasons Customers Choose ForgeRock appeared first on Home - ForgeRock.com.

KatasoftThe end of PHP 5.3 [Technorati links]

August 05, 2015 05:00 AM

PHP 5.3 End of Life Support

Programming languages always progress and change. Bugs are found and patched, and so are security holes in the language. The PHP Group and the PHP community have always prided themselves on making sure developers have the best and most secure code available. Because of this, PHP – like many languages – will end-of-life (EOL) older versions, no longer maintaining them with bug fixes and security updates.

In the next week, you will notice a new release of the PHP SDK that requires you to update your version of PHP to at least 5.4-stable. After joining Stormpath full time last week, it’s been my top priority to ensure the SDK is on track and up to the standards of our other SDKs. It’s critical that our SDKs have the best security support available, and moving to 5.4 also lets us support new features that were not available in PHP 5.3.

From PHP 5.3 to PHP 5.4

The plan to remove support for PHP 5.3 has been in the works since earlier in 2015. That version brought some very good additions to the language: namespaces, closures, and PHAR. These were HUGE additions that represented real progress for PHP when they landed back in mid-2009.

Fast forward six years: as the language progressed into 5.4, more and more great additions were added to PHP. We could not use them, because we were still supporting an older version.
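
To make the gap concrete, here is a small illustrative snippet using two PHP 5.4 features, short array syntax and traits, that will not even parse on 5.3:

    <?php
    // Short array syntax (5.4+); PHP 5.3 requires array(...).
    $scopes = ['read', 'write'];

    // Traits (5.4+) let classes share behaviour without inheritance.
    trait Describable
    {
        public function describe()
        {
            return get_class($this) . ': ' . implode(', ', $this->scopes);
        }
    }

    class ApiKey
    {
        use Describable;
        public $scopes = ['read'];
    }

    echo (new ApiKey())->describe(); // "ApiKey: read"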

What Does the 5.3 End of Life Mean for the PHP SDK?

If you are still using PHP 5.3, you will not be able to upgrade the Stormpath PHP SDK package past 1.6.0-beta, as we will be implementing some features that are not available in PHP 5.3. We understand this can be a major pain, but for the security of your users we hope you will plan an upgrade to 5.4.
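
If you would rather fail fast with a clear message than hit a parse error deep inside the SDK, a small guard at your application’s entry point can help. This is a suggestion on our part, not something the SDK does for you:

    <?php
    // Refuse to run on PHP versions below the SDK's new 5.4 minimum.
    if (version_compare(PHP_VERSION, '5.4.0', '<')) {
        throw new RuntimeException(
            'The Stormpath PHP SDK now requires PHP 5.4 or newer; ' .
            'this server is running ' . PHP_VERSION . '. Please upgrade.'
        );
    }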

We will be releasing a major update to the SDK in the coming weeks to include these features, and end 5.3 support.

If you have any questions or issues regarding the SDK, please let us know. Contact us at support@stormpath.com and we will be happy to help you.

The Roadmap for PHP Support at Stormpath

For full transparency into our plans for minimum PHP version support, here is a chart that shows the roadmap of PHP version support in the Stormpath SDK:

PHP Release Timeline

Going forward, we will follow the EOL schedule of the PHP community closely. Once a version of the language is EOL’d by PHP, we will release a new version of the SDK that requires at least the lowest PHP version the community still supports.

We always suggest that you keep your PHP version at the latest stable release, for your security and that of your customers. At the time of this writing, that is version 5.6, with PHP 7 right around the corner.

Stormpath is committed to making this transition as easy as possible and would like to remind you again that you can always contact support@stormpath.com for any questions or issues you may have during your upgrade of PHP.

August 04, 2015

CourionDigital & physical security plus a cameo from Ms. Austen - It’s #TechTuesday [Technorati links]

August 04, 2015 12:03 PM

Access Risk Management Blog | Courion

blog.courion.com

Mark Dixon - OracleEducational Resources for Space [Technorati links]

August 04, 2015 01:36 AM

EducatorLabs

Recently, I received some fun suggestions from Jasmine Dyoco from EducatorLabs via the Feedback page on this site. Intrigued by some of the Space Travel posts on this blog, she suggested a number of great links to educational sites related to Space and science:

I was impressed by the Vision of EducatorLabs:

EducatorLabs is comprised of school librarians and media/market research specialists who work as curators and conservators of the scholastic web. In previous decades, our resource collections were finite and we knew our card catalog backwards and forwards; nowadays, modern technology provides us with a seemingly infinite inventory of educational resources. Unfortunately, there simply are no comprehensive card catalogs for the internet and, sadly, many untapped resources go undiscovered by most teachers.

Naturally, we feel compelled to bridge the gap. Our mission is to assist educators, for whom time is a precious commodity, in discovering valuable resources of substance for classroom use. We also seek to strengthen connections among the educational web by acting as courier: because of our high standards, our approach is grassroots and hands-on in nature.

As a father of six children, all of whom graduated from public schools in Mesa, AZ, I have deep respect for dedicated educators who go above and beyond their “job descriptions” to offer students outstanding educational experiences. And now, as my grandchildren are growing up, I am so grateful for teachers and schools that are willing to go the extra mile to help young minds learn and grow and spread their wings of discovery!

Thank you, Jasmine!

Mark Dixon - OracleThe Scraping Threat Report 2015 [Technorati links]

August 04, 2015 12:33 AM

Scraping

Back in May, I wrote a couple of posts about illicit Internet bots:

I recently read a short but interesting report on “scraping,” the process of using bots and similar tools to steal information: The Scraping Threat Report 2015, published by ScrapeSentry. The report includes this definition:

Scraping (also known as web scraping, screen scraping or data scraping) is when large amounts of data from a web site is copied manually or with a script or program. Malicious scraping is the systematic theft of intellectual property in the form of data accessible on a web site.

This theft of intellectual property can be very damaging to businesses. If, for example, a scraper can download airline fares from a legitimate site through illicit means, the stolen data can be exploited to fuel unfair business practices.

Some interesting statistics:

Scrapers are generally categorized into the following areas:

In short, if you are an Internet user, these scrapers are generating so much traffic that they are undoubtedly impacting the performance of websites you visit. If you are a website operator and your website contains any type of information that could be exploited for nefarious purposes, scrapers have probably already penetrated your defenses, or at least have you in their bomb sights.
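
Basic detection is within anyone’s reach, though. The sketch below is my own illustration, not something from the report: it flags a client as a likely scraper once it exceeds a request threshold within a sliding window. A production system would keep these counters in shared storage such as Redis rather than a PHP array:

    <?php
    // Flag a client as a possible scraper when it makes more than
    // $threshold requests within the last $windowSeconds.
    function isLikelyScraper(array $timestamps, $now, $windowSeconds = 60, $threshold = 120)
    {
        $recent = array_filter($timestamps, function ($t) use ($now, $windowSeconds) {
            return $t >= $now - $windowSeconds;
        });
        return count($recent) > $threshold;
    }

    // 240 hits from one IP in the last minute trips the check.
    $hits = range(time() - 59, time());
    $hits = array_merge($hits, $hits, $hits, $hits);
    var_dump(isLikelyScraper($hits, time())); // bool(true)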

August 03, 2015

Mark Dixon - OracleCoolest Travel Voucher I’ve Seen! [Technorati links]

August 03, 2015 07:45 PM

Submitting expense reports is one of the seemingly never-ending exercises I have had to endure in over three decades of professional travel. But last week I saw a copy of the coolest travel expense report I have ever seen.

Col. Buzz Aldrin submitted an expense report requesting reimbursement for $33.31 to cover personal expenses for his Apollo 11 trip to the moon!

Enjoy!

TravelVoucher

TravelVoucher2

Julian BondSome satirical humour. [Technorati links]

August 03, 2015 08:41 AM

http://www.mpoweruk.com/coal.htm
In view of the acute crisis caused by the threat of exhaustion of uranium and thorium, the Editors thought it advisable to give the new information contained in the article the widest possible distribution.

One wonders what Otto Frisch would have made of oil, gas and lignite as fuels for power stations. Or Solar Thermal.
 Feasibility of Coal-Driven Power Stations »
The following article is reprinted from the Yearbook of the Royal Institute for the Utilisation of Energy Resources for the Year MMMMCMLV, p1001. In view of the acute crisis caused by the threat of exhaustion of uranium and thorium, the Editors thought it advisable to give the new information ...

[from: Google+ Posts]
July 31, 2015

Matthew Gertner - AllPeersOptions for accommodation in Barcelona [Technorati links]

July 31, 2015 12:39 AM

photo by CC user Mattia Felice Palermo on wikimedia

Heading to one of Spain’s most culturally rich cities soon, but have no idea where to stay? There are many different options for accommodation in Barcelona that will adequately meet your needs – you just need to know what kind of person you are to make a wise decision.

Let’s break down each category of lodging below…

1) Stay in a hotel

Of all the accommodation options open to you in Barcelona, staying in a hotel is by far the most popular way to spend a holiday in one of Europe’s most stylish cities.

If you’ve got cash, the Mercer Hotel Barcelona is the only way to roll, as its exquisite amenities, concierge services and attentive staff will deliver value well beyond what you will pay for your room.

For those on a tighter budget, economical offerings like Hesperia Sant Joan will provide you with clean and comfortable surroundings, while occasionally having some pleasant surprises in store for you (like a pool and kitchenette suites in the case of Hesperia Sant Joan).

2) Rent a holiday apartment

As nice as hotels can be, they often lack privacy and a feeling of being at home. If you are seeking these two qualities in a place to stay in Barcelona, then renting a holiday apartment through providers such as House Trip will help you leave behind noisy neighbors and the sterile atmosphere that hotels often have.

Stylish living rooms, sunny terraces, and homely surroundings can be yours, all for less than the cost of many hotels in the Barcelona area. Be forewarned though: you may never want to go back to booking a room at a major chain ever again!

3) Save money and make foreign friends at one of many hostels

If you are on a longer term trip with a modest budget, staying at one of Barcelona’s trendy hostels might be the best option for you.

From the clean modern design of Sant Jordi Gracia, to the group Spanish and Italian dinners at Hostel One Paralelo, those looking to save money while in Barcelona needn’t sacrifice having a great trip in the process.

In fact, due to the social atmosphere often present in hostels, it may prove to be the superior choice for some people!

4) Connect with the locals via couchsurfing

Emerging in the past five to seven years with the rise of the sharing economy, Couchsurfing grew from a desire to dive deep into the culture of a destination by staying with local residents.

These longtime locals will be able to show you secrets that your Lonely Planet won’t reveal (such as restaurants and bars where locals congregate), cook you regional specialties that you might not be able to find in restaurants in the center of town, and fill you in on the subtleties of Barcelonan culture in a way you’ll be able to understand.

The post Options for accommodation in Barcelona appeared first on All Peers.

July 30, 2015

Ian YipInvisible Identity [Technorati links]

July 30, 2015 01:31 PM
My Name Was Michael & The Rest Is History
Photo source: Michael Shaheen - My Name Was Michael & The Rest Is History
In my previous post, I promised to explain the following:
Organisations should care about identity so they can stop caring about it. Identity needs to disappear, but only from sight; it needs to be invisible.
If you've been to any of Disney's theme parks recently, you may have noticed they now have something called the MagicBand. It cost them a lot of money. Disney calls it "magic". The technology powering the MagicBand infrastructure was complicated to build, but they've done it and have the increased revenue to show for it. They've also managed to turn what is effectively a security device into a new revenue stream by making people pay for them, including charging a premium for versions that have Disney characters on them.

While it does many things, arguably the key benefit of the MagicBand is in delighting Disney's customers by providing seamless, frictionless, surprising experiences without being creepy. For example, when you walk up to a restaurant, you can be greeted by name. You will then be told to take a seat anywhere. Shortly after, your pre-ordered meal will be brought to you wherever you chose to sit, just like magic. If you understand technology, you can inherently figure out how this might work. But the key to all of this is the trust the consumer places in the company. Without that trust, Disney steps over the "creepy" line.
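
As a thought experiment, here is roughly what the happy path might look like once a trusted identity layer exists. Everything below is invented, Disney has not published its internals, but it shows how a band identifier could resolve to a personalised greeting, with unrecognised bands falling back to a generic one:

    <?php
    // Invented data: in reality this lookup would sit behind identity
    // proofing, authentication and access-control layers.
    $guests = [
        'band-42' => ['name' => 'Alice', 'order' => 'pasta'],
    ];

    function greetGuest(array $guests, $bandId)
    {
        if (!isset($guests[$bandId])) {
            return 'Welcome!'; // unrecognised band: generic, non-creepy path
        }
        $g = $guests[$bandId];
        return "Welcome back, {$g['name']}! Your {$g['order']} is on its way.";
    }

    echo greetGuest($guests, 'band-42'); // personalised greeting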

How does Disney ensure trust? Through security, of course. Sure, the brand plays a part, but we've all lost trust in a supposedly trusted brand before because they screwed up their security.

The key pieces of that security? Identity proofing, authentication, access control and privacy, none of which is possible without a functional, secure identity layer.

Conveniently (for me), Ian Glazer recently delivered two presentations that go into a little more depth on the points I'd otherwise have to laboriously make:

  1. Stop treating your customers like your employees
  2. Identity is having its TCP/IP Moment
If you have some time, do yourself a favour and follow those links - you might just learn something :)

What Disney has managed to achieve within their closed walls is exactly what every organisation trying to do something with omni-channel and wearables would like to achieve. Disney is a poster child for what is possible through an identity-enabled platform, particularly in bringing value to the business through increased revenue and customer satisfaction. Identity truly is the enabler for Disney's MagicBand.

The reason it works is that no one notices the identity layer. Not every organisation will be able to achieve everything Disney has managed, but even going part of the way is worth the effort. Only by ensuring the identity layer is there can you really make it invisible.

Until people stop noticing the identity layer, you need to keep working on it. Only then will the business see the full potential and value that identity brings to increasing revenue.