May 25, 2015

Radovan Semančík - nLight: Pax [Technorati links]

May 25, 2015 02:50 PM

My recent posts about ForgeRock attracted a lot of attention. The reactions filled the spectrum almost completely: I've seen agreement and disagreement, peaceful and heated responses. Some people expressed thanks, others were obviously quite upset. Some people seem to have taken it as an attack on ForgeRock. That was not my goal. I didn't want to harm ForgeRock or anyone else personally. All I wanted was to express my opinion about software that I'm using and write down the story of our beginnings. But looking back, I can understand that this kind of expression might come across as too radical. I hadn't thought about that. I'm an engineer, not a politician. Therefore I would like to apologize to all the people I might have hurt. It was not intentional. I didn't want to declare war or anything like that. If you understood it that way, please take this note as an offer of peace.

A friend of mine gave me some very wise advice recently. What has happened is history. What was done cannot be undone. So, let it be. And let's look into the future. After all, if it hadn't been for all that history with Sun, Oracle and ForgeRock, we probably would not have had the courage to start midPoint as an independent project. Therefore I think I should be thankful for it. Do not look back, look ahead. And it looks like there are great things silently brewing under the lid ...

(Reposted from https://www.evolveum.com/pax/)

May 22, 2015

Mark Dixon - Oracle: Bots Generate a Majority of Internet Traffic [Technorati links]

May 22, 2015 06:16 PM

Bot1

According to the 2015 Bad Bot Landscape report, published by Distil Networks, only 40% of Internet traffic is generated by humans! Good bots (e.g. Googlebot and Bingbot for search engines) account for 36% of traffic, while bad bots account for 23%.

Bad bots continue to place a huge tax on IT security and web infrastructure teams across the globe. The variety, volume and sophistication of today’s bots wreak havoc across online operations big and small. They’re the key culprits behind web scraping, brute force attacks, competitive data mining, brownouts, account hijacking, unauthorized vulnerability scans, spam, man-in-the-middle attacks, and click fraud.

These are just averages. It’s much worse for some big players.

Bad bots made up 78% of Amazon’s 2014 traffic, not a huge difference from 2013. Verizon Business really cleaned up its act, cutting its bad bot traffic by 54% in 2014.

It was surprising to me that the US is the largest source for bad bot traffic.

The United States, with thousands of cheap hosts, dominates the rankings in bad bot origination. Taken in isolation, absolute bad bot volume data can be somewhat misleading. Measuring bad bots per online user yields a country’s “Bad Bot GDP.”

Using this latter “bad bots per online user” statistic, the nations of Singapore, Israel, Slovenia and Maldives are the biggest culprits.

The report contains more great information for those who are interested in bots. Enjoy!

Julian Bond: Gin & Germain [Technorati links]

May 22, 2015 05:37 PM
Gin & Germain
Today's Friday Night Cocktail is the Gin & Germain. Except that it's not St Germain. I've been given a bottle of Chase Elderflower liqueur, which is a 20% Chase vodka and elderflower concoction that's basically the same thing. The Gin is Adnams Copper House from our trip last week to the Suffolk coast. So without further ado,

50ml Adnams Copper House Gin
25ml Chase Elderflower
75ml Fevertree tonic
Over Ice, Collins glass, Lime garnish.

It's really just a slightly sweeter G&T with some extra flavours but just the thing for a muggy evening that's an hour away from a downpour.

http://barnotes.co/recipes/gin-st-germaine
http://adnams.co.uk/spirits/our-spirits/distilled-gin/
http://williamschase.co.uk/collections/all-products/products/chase-rhubarb-liqueur-20
[from: Google+ Posts]

Mythics: Part 3 of 3 - ADO: The Feet Hit The Street [Technorati links]

May 22, 2015 05:19 PM

In parts 1 and 2 of this series, we discussed the history of database compression, and the new features in Oracle 12c called…

May 21, 2015

Mark Dixon - Oracle: Big Day for Lindbergh and Earhart! [Technorati links]

May 21, 2015 11:43 PM

Today is the anniversary of two great events in aviation history.  On May 21, 1927, Charles Lindbergh landed in Paris, successfully completing the first solo, nonstop flight across the Atlantic ocean.  Five years later, on May 21, 1932, Amelia Earhart became the first pilot to repeat the feat, landing her plane in Ireland after flying across the North Atlantic.

Congratulations to these brave pioneers of the air!

LindbergEarhart

Both Lindbergh’s Spirit of St. Louis and Earhart’s Lockheed Vega airplanes are now housed in the Smithsonian Air and Space Museum in Washington, DC.

Spirit St Louis 590

Lockheed Vega 5b Smithsonian

 

Katasoft: Easy Unified Identity [Technorati links]

May 21, 2015 07:00 PM

Stormpath + OAuth Opengraph

Unified Identity is the holy grail of website authentication. Allowing your users to log into your website through any mechanism they want, while always having the same account details, provides a really smooth and convenient user experience.

Unfortunately, unified identity can be tricky to implement properly! How many times have you logged into a website with Google Login, for instance, then come back to the site later and created an account with email / password only to discover you now have two separate accounts! This happens to me all the time and is really frustrating.

In a perfect world, a user should be able to log into your website with an email / password pair or with any social login provider (Google, Facebook, Twitter, etc.) — and always have the same account / account data, regardless of how they choose to log in at any particular point in time.

Unified Identity Management

Over the past few months we’ve been collaborating with our good friends over at OAuth.io to build a unified identity management system that combines OAuth.io’s broad support for social login providers with Stormpath’s powerful user management, authorization and data security service.

Here’s how it works:

You’ll use OAuth.io’s service to connect your Google, Facebook, Twitter, or any other social login services.

You’ll then use Stormpath to store your user accounts and link them together to give you a single, unified identity for every user.

It’s really simple to do and works very well!

And because OAuth.io supports over 100 separate OAuth providers, you can allow your website visitors to log in with just about any service imaginable!

User Registration & Unification Demo

To see how it works, I’ve created a small demo app you can check out here: https://unified-identity-demo.herokuapp.com/

Everything is working in plain old Javascript — account registration, unified identity linking, etc.

Go ahead and give it a try! It looks something like this:

Unified Identity Demo

If you’d like to dig into the code yourself, or play around with the demo app on your own, you can visit our project repository on Github here: https://github.com/stormpath/unified-identity-demo

If you’re a Heroku user, you can even deploy it directly to your own account by clicking the button below!

Deploy

Configure Stormpath + OAuth.io Integration

Let’s take a look at how simple it is to add unified identity to your own web apps now.

Firstly, you’ll need to go and create both an OAuth.io account and a Stormpath account.

Don’t worry — both are totally free to use.

Next, you’ll need to create a Stormpath Application.

Stormpath Create Application

Next, you’ll need to log into your OAuth.io Dashboard, visit the “Users Overview” tab, and enter your Stormpath Application name and credentials.

OAuth.io Stormpath Configuration

Finally, you need to visit the “Integrated APIs” tab in your OAuth.io Dashboard and add in your Google app, Facebook app, and Twitter app credentials. This makes it possible for OAuth.io to easily handle social login for your web app:

OAuth.io Social Configuration

Show Me the Code!

Now that we’ve got the setup stuff all ready to go, let’s take a look at some code.

The first thing you’ll need to do is activate the OAuth.io Javascript library in your HTML pages:

<script src="https://stormpath.com/static/js/oauth.min.js"></script>
<script>
  OAuth.initialize("YOUR_OAUTHIO_PUBLIC_KEY");
</script>

You’ll most likely want to include this at the bottom of the <head> section in your HTML page(s). This will initialize the OAuth.io library.

Next, in order to register a new user via email / password you can use the following HTML / Javascript snippet:

<form onsubmit="return register()">
  <input id="firstName" placeholder="First Name" required>
  <input id="lastName" placeholder="Last Name" required>
  <input id="email" placeholder="Email" required type="email">
  <input id="password" placeholder="Password" required type="password">
  <button type="submit">Register</button>
</form>
<script>
  function register() {
    User.signup({
      firstname: document.getElementById('firstName').value,
      lastname: document.getElementById('lastName').value,
      email: document.getElementById('email').value,
      password: document.getElementById('password').value
    }).done(function(user) {
      // Redirect the user to the dashboard if the registration was
      // successful.
      window.location = '/dashboard';
    }).fail(function(err) {
      alert(err);
    });
    return false;
  }
</script>

This will create a new user account for you, with the user account stored in Stormpath.

Now, in order to link a social login provider (Google, Facebook, Twitter, etc.) to a user’s existing account — so they can also log in that way — you can do something like this:

<script>
  function link(provider) {
    var user = User.getIdentity();
    OAuth.popup(provider).then(function(p) {
      return user.addProvider(p);
    }).done(function() {
      // User identity has been linked!
    });
  }
</script>
<button type="button" onclick="link('facebook')">Link Facebook</button>
<button type="button" onclick="link('google')">Link Google</button>
<button type="button" onclick="link('twitter')">Link Twitter</button>

If a user clicks any of the three defined buttons above, they’ll be prompted to log into their social account and accept permissions — and once they’ve done this, they’ll then have their social account ‘linked’ to their normal user account that was previously created.

Once a user’s account has been ‘linked’, the user can log into any of their accounts and will always have the same account profile returned.

Nice, right?!

Simple Identity Management

Hopefully this quick guide has shown you how easy it can be to build unified identity into your next web application. With Stormpath and OAuth.io, it can take just minutes to get a robust, secure, user authentication system up and running.

To dive in, check out the demo project on github: https://github.com/stormpath/unified-identity-demo

Happy Hacking!

May 20, 2015

Nat Sakimura: JWS and JWT are now RFCs! [Technorati links]

May 20, 2015 02:35 AM

It took quite a while [1], but JSON Web Signature (JWS) and JSON Web Token (JWT) have finally become Standards Track RFCs [2]: they are [RFC7515] and [RFC7519], respectively.

For those who are not familiar with them: JWS is a specification for applying digital signatures to JSON; think of it as the JSON counterpart of XML Signature. It comes in two forms, a JSON serialization and a Compact serialization.

JWT takes the Compact serialization of JWS and adds a number of useful parameter names so that it can carry login information and access permissions. It is intended primarily for RESTful systems, but of course it can be used elsewhere as well. Google and Microsoft have both already deployed it at large scale, so you are probably using it without even knowing it. Deploying it this widely before it became an RFC takes real nerve, though; in Google's case it is even baked into Android, so any change would have made updates painful. I can only admire their courage.

In any case, now that they are officially RFCs, please feel free to use them without hesitation.

[1] JSON Simple Sign dates back to 2010, so it took five years… The JOSE WG was formed at the IETF in November 2011; it has been a very long road.
[2] There are three RFC tracks: Informational, Experimental, and Standards Track, and only Standards Track documents are considered actual "standards." Many frequently cited RFCs are in fact only Informational, so it pays to check.
[RFC7515] http://www.rfc-editor.org/info/rfc7515
[RFC7519] http://www.rfc-editor.org/info/rfc7519

Mike Jones - Microsoft: JWT and JOSE are now RFCs! [Technorati links]

May 20, 2015 12:54 AM

The JSON Web Token (JWT) and JSON Object Signing and Encryption (JOSE) specifications are now standards – IETF RFCs. They are:

RFC 7515 – JSON Web Signature (JWS)
RFC 7516 – JSON Web Encryption (JWE)
RFC 7517 – JSON Web Key (JWK)
RFC 7518 – JSON Web Algorithms (JWA)
RFC 7519 – JSON Web Token (JWT)

This completes a 4.5 year journey to create a simple JSON-based security token format and underlying JSON-based cryptographic standards. The goal was always to “keep simple things simple” – making it easy to build and deploy implementations solving commonly-occurring problems using whatever modern development tools implementers chose. We took an engineering approach – including features we believed would be commonly used and intentionally leaving out more esoteric features, to keep the implementation footprint small. I’m happy to report that the working groups and the resulting standards stayed true to this vision, with the already widespread adoption and an industry award being testaments to this accomplishment.

The origin of these specifications was the realization in the fall of 2010 that a number of us had created similar JSON-based security token formats. Seemed like it was time for a standard! I did a survey of the choices made by the different specs and made a convergence proposal based on the survey. The result was draft-jones-json-web-token-00. Meanwhile, Eric Rescorla and Joe Hildebrand had independently created another JSON-based signature and encryption proposal. We joined forces at IETF 81, incorporating parts of both specs, with the result being the -00 versions of the JOSE working group specs.

Lots of people deserve thanks for their contributions. Nat Sakimura, John Bradley, Yaron Goland, Dirk Balfanz, John Panzer, Paul Tarjan, Luke Shepard, Eric Rescorla, and Joe Hildebrand created the precursors to these RFCs. (Many of them also stayed involved throughout the process.) Richard Barnes, Matt Miller, James Manger, and Jim Schaad all provided detailed input throughout the process that greatly improved the result. Brian Campbell, Axel Nennker, Emmanuel Raviart, Edmund Jay, and Vladimir Dzhuvinov all created early implementations and fed their experiences back into the spec designs. Sean Turner, Stephen Farrell, and Kathleen Moriarty all did detailed reviews that added ideas and improved the specs. Matt Miller also created the accompanying JOSE Cookbook – RFC 7520. Chuck Mortimore, Brian Campbell, and I created the related OAuth assertions specs, which are now also RFCs. Karen O’Donoghue stepped in at key points to keep us moving forward. Of course, many other JOSE and OAuth working group and IETF members also made important contributions. Finally, I want to thank Tony Nadalin and others at Microsoft for believing in the vision for these specs and consistently supporting my work on them.

I’ll close by remarking that I’ve been told that the sign of a successful technology is that it ends up being used in ways that the inventors never imagined. That’s certainly already true here. I can’t wait to see all the ways that people will continue to use JWTs and JOSE to build useful, secure applications!

May 19, 2015

Mike Jones - Microsoft: The OAuth Assertions specs are now RFCs! [Technorati links]

May 19, 2015 11:56 PM

The OAuth Assertions specifications are now standards – IETF RFCs. They are:

RFC 7521 – Assertion Framework for OAuth 2.0 Client Authentication and Authorization Grants
RFC 7522 – Security Assertion Markup Language (SAML) 2.0 Profile for OAuth 2.0 Client Authentication and Authorization Grants
RFC 7523 – JSON Web Token (JWT) Profile for OAuth 2.0 Client Authentication and Authorization Grants

This completes the nearly 5 year journey to create standards for using security tokens as OAuth 2.0 authorization grants and for OAuth 2.0 client authentication. Like the JWT and JOSE specs that are now also RFCs, these specifications have been in widespread use for a number of years, enabling claims-based use of OAuth 2.0. My personal thanks to Brian Campbell and Chuck Mortimore for getting the ball rolling on this and seeing it through to completion, to Yaron Goland for helping us generalize what started as a SAML-only authorization-grant-only spec to a framework also supporting client authentication and JWTs, and to the OAuth working group members, chairs, area directors, and IETF members who contributed to these useful specifications.

Mark Dixon - Oracle: Turing Test (Reversed) [Technorati links]

May 19, 2015 10:13 PM

Turing1

The classic Turing Test, according to Wikipedia, is:

a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Alan Turing proposed that a human evaluator would judge natural language conversations between a human and a machine that is designed to generate human-like responses. …

The test was introduced by Turing in his 1950 paper “Computing Machinery and Intelligence.” …

As illustrated in the first diagram:

The “standard interpretation” of the Turing Test, in which player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination. …

In the years since 1950, the test has proven to be both highly influential and widely criticised, and it is an essential concept in the philosophy of artificial intelligence.

Turing2

What if the roles were reversed, and a computer was tasked with determining which of the entities on the other side of the wall was a human and which was a computer?  Such is the challenge for software that needs to decide which requests made to an online commerce system are generated by humans typing on a browser, and which are illicit bots imitating humans.

By one year-old estimate, “more than 61 percent of all Web traffic is now generated by bots, a 21 percent increase over 2012.” Computers must automatically determine which requests come from people and which come from bots, as illustrated in the second diagram.

While this is not strictly a Turing test, it has some similar characteristics.  The computer below the line doesn’t know ahead of time what techniques the bots will use to imitate human interaction. These decisions need to be made in real time and be accurate enough to prevent illicit bots from penetrating the system. A number of companies offer products or services that accomplish this task.

One might ask, “Does this process of successfully choosing between human and bot constitute artificial intelligence?”

At the current state of the art, I think not, but it is an area where enhanced computer intelligence could provide real value.

Radiant Logic: In the Land of Customer Profiles, SQL and Data Integration Reign Supreme [Technorati links]

May 19, 2015 06:27 PM

Last week, we took a look at the challenges faced by “traditional IAM” vendors as they try to move into the customer identity space. Such vendors offer web access management and federation packages that are optimized for LDAP/AD and aimed at employees. Now we should contrast that with the new players in this realm and explore how they’re shaping the debate—and growing the market.

Beyond Security with the New IAM Contenders: Leveraging Registration to Build a More Complete Customer Profile

So let’s review the value proposition of the two companies that have brought us this new focus on customer identity: Gigya and Janrain. For these newcomers, the value is not only about delivering security for access or a better user experience through registration. They’re also aimed at leveraging that registration process to collect data for a complete customer profile, moving from a narrow security focus to a broader marketing/sales focus—and this has some consequences for the identity infrastructure and services needed to support these kind of operations.

For these new contenders, security is a starting point to serve better customer knowledge, more complete profiles, and the entire marketing and sales lifecycle. So in their case it is not only about accessing or recording customer identities, it’s about integrating and interfacing this information into the rest of the marketing value chain, using applications such as Marketo and others to build a complete profile. So one of the key values here is about collecting and integrating customer identity data with the rest of the marketing/sales activities.

At the low level of storage and data integration, that means the best platform for accomplishing this would be SQL—or better yet, a higher-level “join” service that’s abstracted or virtual, as in the diagram below. It makes sense that you’d need some sort of glue engine to join identities with the multiple attributes that are siloed across the different processes of your organization. And we know that LDAP directories alone, without some sort of integration mechanism, are not equipped for that. In fact, Gigya, the more “pure play” in this space, doesn’t even use LDAP directories; instead, they store everything in a relational database because SQL is the engine for joining.

Virtualization Layer

So if we look at the customer identity market through this lens of SQL and the join operation, I see a couple of hard truths for the traditional IAM folks:

  1. First, if we’re talking about using current IAM packages in the security field for managing customer access, performance and scalability are an issue due to the “impedance” problem. Sure, your IAM package “supports” SQL but it’s optimized for LDAP, so unless you migrate—or virtualize—your customers’ identity from SQL to LDAP in the large volumes that are characteristic of this market, you’ll have problems with the scalability and stability of your solution. (And this does not begin to cover the need for flexibility or ease of integration with your existing applications and processes dealing with customers).
  2. And second, if you are looking at leveraging the customer registration process as a first step to build a complete profile, your challenge is more in data/service integration than anything else. In that case, I don’t see where there’s a play for “traditional WAM” or “federation” vendors that stick to an LDAP model, because no one except those equipped with an “unbound” imagination would use LDAP as an engine for integration and joining… :)

The Nature of Nurturing: An Object Lesson in Progressive, Contextual Disclosure

Before we give up all hope on directories (or at least on hierarchies, graphs, and LDAP), let’s step beyond the security world for a second and look at the marketing process of nurturing prospect and customer relationships. Within this discipline, a company deals with prospects and customers in a progressive way, guiding them through each stage of the process in a series of steps and disclosing the right amount of information within the right context. And of course, it’s natural that such a process could begin with the registration of a user.

We’ll step through this process in my next post, so be sure to check back for more on this topic…


The post In the Land of Customer Profiles, SQL and Data Integration Reign Supreme appeared first on Radiant Logic, Inc

Matthew Gertner - AllPeers: Five Young British Athletes To Watch Out For At The 2016 Rio Olympics [Technorati links]

May 19, 2015 04:24 AM

At the upcoming Summer Olympics in Rio, there will be many young British athletes to watch ... photo by CC user 39405339@N00 on Flickr

The 2014 Nanjing Youth Olympics in China were seen as a huge success for Team GB with a haul of 24 medals, but which British youngsters will we be seeing with Olympic honours in Rio 2016?

JESSICA FULLALOVE, 18 (SWIMMING)

Jessica Fullalove has her standards set high. She believes that she can be the female equivalent to Michael Phelps and the “Usain Bolt of the pool”. With three silver medals at the Youth Olympics in Nanjing and a senior Commonwealth appearance under her belt, no one is ruling out Jessica’s potential.

The 18-year-old from Oldham is already one of the UK’s most exciting new sporting stars and there’s no doubt that, by the time she hits the water in Rio, Britain will be full of love for Jessica Fullalove.

SALLY BROWN, 19 (TRACK)

Sally Brown’s progress on the world stage has been hampered by injuries for the last few years but her abilities are highly rated by everyone within the sporting community. Having successfully recovered from the same injury herself, Jessica Ennis-Hill has offered her personal assurance that Sally can make a full recovery from the stress fracture to her right foot that has kept her out of so many competitions in the last year.

Sally, who receives financial help and sports insurance from Sportsaid and Bluefin Sport, has also had to work part time at Sainsbury’s while recovering from her injury, as well as tackle her A-Levels at the same time. Despite all this, she is predicted to return to the track physically and mentally stronger than ever, and at her best she has a real chance of being among the medals at the Rio 2016 Paralympics.

MORGAN LAKE, 17 (TRACK)

With Olympic champion Jessica Ennis-Hill and British high jump and indoor long jump record holder Katarina Johnson-Thompson both already tipped to be competing for heptathlon medals in Rio, it’s unlikely that there is room for another British heptathlon hopeful, but 17-year-old Morgan Lake may have something to say about that. Morgan is not only the double junior heptathlon and high jump world champion but she is also performing and scoring significantly better than Jessica Ennis-Hill and Katarina Johnson-Thompson were at her age.

Last year she broke the world indoor pentathlon record with 4,284 points and the junior high jump record with a jump of 1.93m, so don’t be surprised to find Morgan breaking more records at a senior level in Rio.

CLAUDIA FRAGAPANE, 17 (GYMNASTICS)

Claudia Fragapane is only 4ft 5″ tall but her achievements are already looming large in the gymnastics world. An astonishing four gold medals at the 2014 Commonwealth Games (making her the first British woman to achieve this feat in 84 years) saw her go on to win BBC Young Sports Personality of the Year, and she is hoping to reproduce that success in Rio.

While 17 may not be that young in gymnastics, Claudia Fragapane is a British athlete who has to be mentioned on this list, as she has the potential not only to clinch some medals in Rio but also to evolve into one of Britain’s greatest ever gymnasts.

CHRIS MEARS, 22 (DIVING)

Never write off Chris Mears. This is a man who has made a habit of accomplishing seemingly impossible tasks. After unknowingly suffering with enlarged glands from glandular fever, Mears ruptured his spleen while competing at the 2009 Australian Youth Olympic Festival in Sydney.

He lost five pints of blood and his blood platelet count fell to five, down from around 400; his parents were informed that his chances of survival were incredibly slim. Mears did survive and seemed to be making an extraordinary recovery when an unexpected seizure put him into a coma.

Doctors informed his family that he was likely to have suffered irreparable brain damage from the severity of the seizure but again Mears defied science and made a full recovery. He was told that he would never dive again but by 2012 he was competing in the London Olympics and by 2014 had won a gold medal at the Commonwealth Games with his partner Jack Laugher. Chris is now waiting for someone to tell him that he won’t be able to win a medal in Rio.

The post Five Young British Athletes To Watch Out For At The 2016 Rio Olympics appeared first on All Peers.

Matthew Gertner - AllPeers: Great Ways to Save on Your Satellite Television Service [Technorati links]

May 19, 2015 04:24 AM

How can you save on your satellite television service? ... photo by CC user Loadmaster  on wikimedia

I love watching television as much as the next person. Being able to catch up on some of my favorite television shows throughout the week, and even watching a flick or two with the family on the weekends can be a great pastime.

However, if you’re like me, you too have a busy schedule that doesn’t allow you to watch television hour after hour. I probably watch a total of ten hours of television (on a lucky week), yet the subscription services can often be kind of costly. Rather than ditch the television service altogether, I decided to switch from cable to satellite services while still looking for huge savings.

Here are a few options you might try to save on your satellite television service at home:

Bundling Services

One option that satellite television service providers have is bundling. This essentially means that you’re able to package your television, internet, and phone services into one for a discounted price. By choosing a bundle package, consumers can save as much as 10-20% on their monthly bill.

However, since satellite television service providers don’t have their own internet and phone services, you will have to receive the bundled discounts through their partnering service providers. There are several for you to choose from so that you can get the landline features and internet speed you need.

Compare Packages

Another way to get your satellite television bill down is to compare the various packages that are offered. Not only should you compare packages between various satellite subscription providers, but you should also compare packages within each company.

Each service provider has several packages (generally a basic, premium, and platinum package). Review each of the packages available to see which one will give you the best options for landline features, internet speeds and channel line ups. Visit tvlocal.com to go over the various packages on offer and choose which one will work best for your entertainment and communication needs.

Ask About Specials

Companies are looking to gain new customers on a regular basis. Therefore, if you really want to get a good deal, contact the company directly to find out what types of specials they might be able to offer you. Sometimes, you’ll find that a customer service representative is willing to offer you more discounts simply to get you signed up as a customer.

Upgraded Technology

Generally with a cable subscription service, you’ll need to have a cable box for every television set you have available. This can cost you an additional rental fee for each box you have. However, satellite service providers have new technology that will allow you to purchase wireless boxes that can connect and be used for multiple television sets. Several satellite television providers also give a huge savings for new customers looking to purchase the latest technology.

If you’re looking for convenient yet affordable ways to save money on your monthly television subscription services, these ideas will certainly help you save a bundle. When choosing the best television service provider, be sure to also compare things such as overall value, channel lineup, and features to get the best bang for your buck.

Now you can keep up with all the latest television shows and movies without having to break the bank. Here’s to binge watching and comfort foods. Enjoy.

The post Great Ways to Save on Your Satellite Television Service appeared first on All Peers.

May 18, 2015

Katasoft: REST VS SOAP: When Is REST Better? [Technorati links]

May 18, 2015 11:49 PM

While SOAP (Simple Object Access Protocol) has been the dominant approach to web service interfaces for a long time, REST (Representational State Transfer) is quickly winning out and now represents over 70% of public APIs.

REST is simpler to interact with, particularly for public APIs, but SOAP is still used and loved for specific use cases. REST and SOAP have important, frequently overlooked differences, so when building a new web service, do you know which approach is right for your use case?

Spoiler Alert: USE REST+JSON. Here’s Why…

SOAP: The Granddaddy of Web Services Interfaces

SOAP is a mature protocol with a complete spec, and is designed to expose individual operations – or pieces of operations – as web services. One of the most important characteristics of SOAP is that it defines the content of the message in its own XML envelope rather than relying on HTTP conventions.

The Argument For SOAP

SOAP is still offered by some very prominent tech companies for their APIs (Salesforce, Paypal, Docusign). One of the main reasons: legacy system support. If you built a connector between your application and Salesforce back in the day, there’s a decent probability that connection was built in SOAP.

There are a few additional situations:

Some would argue that because of these features, as well as support for WS-AtomicTransaction and WS-Security, SOAP can benefit developers when there is a high need for transactional reliability.

REST: The Easy Way to Expose Web Services

And yet, most new APIs are built in REST+JSON. Why?

First, REST is easy to understand: it uses HTTP and basic CRUD operations, so it is simple to write and document. This ease of use also makes it easy for other developers to understand your API and to write services against it.

REST also makes efficient use of bandwidth, as it’s much less verbose than SOAP. Unlike SOAP, REST is designed to be stateless and REST reads can be cached for better performance and scalability.

REST supports many data formats, but the predominant use of JSON means better support for browser clients. JSON sets a standardized method for consuming API payloads, so you can take advantage of its connection with JavaScript and the browser. Read our best practices on REST+JSON API Design Here.

Case 1: Developing a Public API

REST focuses on resource-based (or data-based) operations, and inherits its operations (GET, PUT, POST, DELETE) from HTTP. This makes it easy for both developers and web-browsers to consume it, which is beneficial for public APIs where you don’t have control over what’s going on with the consumer. Simplicity is one of the strongest reasons that major companies like Amazon and Google are moving their APIs from SOAP to REST.
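
To make the resource-plus-verb idea concrete, here is a minimal sketch in plain Java (using the JDK's HttpURLConnection) that creates a resource by POSTing a JSON body. The endpoint URL and JSON fields are hypothetical placeholders, not any particular vendor's API:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestJsonExample {

    public static void main(String[] args) throws Exception {
        // Hypothetical collection resource; in CRUD terms, POST here means "create".
        URL url = new URL("https://api.example.com/users");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Accept", "application/json");
        conn.setDoOutput(true);

        // The payload is a plain JSON document; no SOAP envelope or WSDL is involved.
        String json = "{\"name\":\"Ada Lovelace\",\"email\":\"ada@example.com\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }

        // The status code and JSON body are all the client needs to interpret the result.
        System.out.println("HTTP " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}

Reading the resource back is just a GET against the returned resource URL with an Accept: application/json header; no generated stubs are required.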

Case 2: Extensive Back-and-Forth Object Information

APIs used by apps that require a lot of back-and-forth messaging should always use REST. For example, mobile applications. If a user attempts to upload something to a mobile app (say, an image to Instagram) and loses reception, REST allows the process to be retried without major interruption, once the user regains cell service.

However, with SOAP, the same type of service would require more initialization and state code. Because REST is stateless, the client context is not stored on the server between requests, giving REST services the ability to be retried independently of one another.

Case 3: Your API Requires Quick Developer Response

REST allows easy, quick calls to a URL for fast return responses. The difference between SOAP and REST in this case is complexity: SOAP services require maintaining an open stateful connection with a complex client. REST, in contrast, enables requests that are completely independent from each other. The result is that testing with REST is much simpler.

Helpfully, REST services are now well-supported by tooling. The available tools and browser extensions make testing REST services continually easier and faster.

Developer Resources for REST+JSON API Development

Stormpath is a REST+JSON API-based authentication and user management system for your web and mobile services and APIs. We <3 REST+JSON.

If you want learn more about how to build, design, and secure REST+JSON APIs, here are some developer tutorials and explainer blogposts on REST+JSON API Development:

Mark Dixon - Oracle: Security: Complexity and Simplicity [Technorati links]

May 18, 2015 11:48 PM

Leobruce

It is quite well documented that Bruce Schneier stated that “Complexity is the worst enemy of security.”

As a consumer, I think this complexity is great. There are more choices, more options, more things I can do. As a security professional, I think it’s terrifying. Complexity is the worst enemy of security.  (Crypto-Gram newsletter, March 15, 2000)

Leonardo da Vinci is widely credited with the statement, “Simplicity is the ultimate sophistication,” although there is some doubt whether he actually said those words.

Both statements have strong implications for information security today.

In the March 2000 newsletter, Bruce Schneier suggested five reasons why security challenges rise as complexity increases:

  1. Security bugs.  All software has bugs. As complexity rises, the number of bugs goes up.
  2. Modularity of complex systems.  Complex systems are necessarily modular; security often fails where modules interact.
  3. Increased testing requirements. The number of errors and the difficulty of evaluation grow rapidly as complexity increases.
  4. Complex systems are difficult to understand. Understanding becomes more difficult as the number of components and system options increase.
  5. Security analysis is more difficult. Everything is more complicated – the specification, the design, the implementation, the use, etc.

In his February 2015 article, “Is Complexity the Downfall of IT Security,”  Jeff Clarke suggested some other reasons:

  1. More people involved. As a security solution becomes more complex, you’ll need more people to implement and maintain it. 
  2. More countermeasures. Firewalls, intrusion-detection systems, malware detectors and on and on. How do all these elements work together to protect a network without impairing its performance? 
  3. More attacks. Even if you secure your system against every known avenue of attack, tomorrow some enterprising hacker will find a new exploit. 
  4. More automation. Removing people from the loop can solve some problems, but like a redundancy-management system in the context of reliability, doing so adds another layer of complexity.

And, of course, we need to consider the enormous scale of this complexity.  Cisco has predicted that 50 billion devices will be connected to the Internet by 2020.  Every interconnection in that huge web of devices represents an attack surface.

How in the world can we cope? Perhaps we need to apply Leonardo’s simplicity principle.

I think Bruce Schneier’s advice provides a framework for simplification:

  1. Resilience. If nonlinear, tightly coupled complex systems are more dangerous and insecure, then the solution is to move toward more linear and loosely coupled systems. This might mean simplifying procedures or reducing dependencies or adding ways for a subsystem to fail gracefully without taking the rest of the system down with it.  A good example of a loosely coupled system is the air traffic control system. It’s very complex, but individual failures don’t cause catastrophic failures elsewhere. Even when a malicious insider deliberately took out an air traffic control tower in Chicago, all the planes landed safely. Yes, there were traffic disruptions, but they were isolated in both time and space.
  2. Prevention, Detection and Response. Security is a combination of prevention, detection, and response. All three are required, and none of them are perfect. As long as we recognize that — and build our systems with that in mind — we’ll be OK. This is no different from security in any other realm. A motivated, funded, and skilled burglar will always be able to get into your house. A motivated, funded, and skilled murderer will always be able to kill you. These are realities that we’ve lived with for thousands of years, and they’re not going to change soon. What is changing in IT security is response. We’re all going to have to get better about IT incident response because there will always be successful intrusions.

But a final thought from Bruce is very appropriate. “In security, the devil is in the details, and those details matter a lot.”

May 15, 2015

Mark Dixon - Oracle: Just Another Day at the Office [Technorati links]

May 15, 2015 09:35 PM

Today’s featured photo from NASA shows the Space Station’s crew on an ordinary day of work.

NASA150515

The six-member Expedition 43 crew worked a variety of onboard maintenance tasks, ensuring crew safety and the upkeep of the International Space Station’s hardware. In this image, NASA astronauts Scott Kelly (left) and Terry Virts (right) work on a Carbon Dioxide Removal Assembly (CDRA) inside the station’s Japanese Experiment Module.

For just a day or two, it would be so fun to work in weightless conditions.  Not too probable at this stage of my life, however!

 

Gluu: OAuth 2.0 as the Solution for Three IoT Security Challenges [Technorati links]

May 15, 2015 05:08 PM

Note: This article was originally published as a guest blog for Alien Vault.

Ideas on managing IoT in your house

While participating on the Open Interconnect Consortium Security Task Group, I offered to describe a use case for Internet of Things (IOT) security that would illustrate how OAuth2 could provide the secret sauce to make three things possible that were missing from their current design: (1) leveraging third-party digital credentials; (2) centrally managing access to IOT resources in a vendor-neutral way; and (3) machine-to-machine discovery and authentication.

IOT physical door locks provide a concrete use case that has intrigued me for a long time–what could be more fundamental to access management than controlling who can enter your house? Wouldn’t it be great if the person could use their state-issued driver’s license to unlock your front door? Two standard profiles of OAuth2 can make this possible: OpenID Connect (to identify you using your driver’s license), and the User Managed Access protocol (UMA), to centralize policy management.

Trusted Credentials & Standard APIs

The idea of a state-issued digital credential is not that crazy. Many countries have digital identifiers. In Switzerland, you can obtain a government-issued digital ID in the form of a USB stick called SwissID. But your mobile phone has the potential to be a more convenient credential than a USB stick. And this is exactly the goal of several state-issued mobile driver’s license concepts proposed by Delaware and Iowa.

But what APIs will your state publish to enable authorized Web, mobile, or IOT clients to use this new mobile credential? The most likely candidate is the above-mentioned OAuth2 profile for authentication: OpenID Connect. Developers are already familiar with OpenID Connect if they’ve ever used the Google authentication APIs.

So, in our hypothetical scenario, we now have our third-party digital credential (a state mobile driver’s license) and we have OpenID Connect APIs, published by the state, with which to identify the person who was issued the license. The next component of our system is a central security management user interface that enables the homeowner to manage who has the capability to access their home. Conveniently, this same Console can be used to control other IOT devices that have APIs.

Central Permission Management

The reason we need a central management user interface is simple–if every IOT device in your home has its own security management web interface, it won’t scale. There are all sorts of new decisions consumers will have to make. For example:

Using a central policy decision point, people can manage in one place which policies apply to what, without having to go to the web admin page of every device. For short, let’s call this thing the “Console.”

So let’s walk through in a little more detail how this use case would work:

  1. The homeowner would configure their Console to rely on the OpenID Provider (OP) of certain domains. For this example, let’s say there are two domains: 1) mystate.gov and 2) the local domain for your house. You might want a local domain to manage accounts for people who don’t have a driver’s license, like your young kids. For people in your local domain, you’ll also have to manage their credentials, i.e. passwords. This might be a pain in the neck, but at least you don’t have to manage users for every IOT device in your house.
  2. Using OpenID Connect Discovery, the Console could immediately find out the local and state OpenID Connect API URLs, and other information required to securely identify a person at the external domain. The OpenID Connect Discovery spec is very simple: just make an HTTPS GET request to https://<domain>/.well-known/openid-configuration. This will return a JSON object with the URLs for the APIs of the OP, and other information your Console will need, like what kind of crypto is supported and what kind of authentication is available (see the sketch after this list). If you want to see an example of an OpenID Connect Discovery response, check out Gluu’s OpenID Connect discovery page.
  3. Next your Console would dynamically register itself as a client with the state OpenID Connect Provider (OP) using the OpenID Connect dynamic client registration API. Once completed, the Console will be able to authenticate a person in the respective domain.
  4. The person using the console could then define a policy that describes how a person entering the house should be authorized–for example, using what credentials and during what time of day.
  5. The door lock would use OpenID Connect Discovery and Dynamic Client Registration to register itself with the Console.
  6. The door lock would rely on the console to handle the person’s authentication. The console would call the OpenID Connect authentication APIs at the state, which could result in the state sending a PUSH notification to the person’s pre-registered mobile device. The person might see an alert that says something like “10 Oak Drive Security System wants to verify your first and last name. Is it ok to release this information?” Once approved, the policy decision point can use that information for policy evaluation. For example, perhaps I made a policy that said to let John Smith enter my house from 10am – 2pm. A PUSH notification could be combined with biometric, cognitive, or other physical tokens to make the identification multi-factor. The policy in the Console could even require one of these mechanisms by specifying a specific ‘acr’, that would be provided as part of the response by the state OpenID Connect provider.
  7. The Console has a few ways it could handle enrollment–which users are allowed to enter the house. Access requests could be queued for approval by the homeowner, or perhaps the homeowner registers the person in advance.
  8. What would the interface look like for the door lock? How would the person assert who they are, or enter their username and password for locally managed credentials? Here are a few ideas: the person could enter a short URL in the mobile browser of their phone; the person could read the URL via NFC; or a smart service provider could offer an app that uses geolocation to find the nearest lock.
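
As referenced in step 2 above, here is a rough sketch of what that discovery call could look like in Java. The issuer hostname is hypothetical, but the well-known path is the same for any OpenID Connect provider:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class OpenIdDiscoveryExample {

    public static void main(String[] args) throws Exception {
        // Hypothetical state-run OpenID Provider; substitute the real issuer domain.
        URL url = new URL("https://idp.mystate.gov/.well-known/openid-configuration");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                json.append(line);
            }
        }
        conn.disconnect();

        // The returned JSON lists the authorization, token, and registration endpoints,
        // the supported signing/encryption algorithms, and (optionally) the supported
        // 'acr' values that the Console's policies could require.
        System.out.println(json);
    }
}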

Of course the door lock might have some fallback mechanisms. Perhaps if Wifi is down, the door might fallback to some kind of proprietary Bluetooth mechanism. Or have a physical key, like a USB key that opens the lock. Or even a high-tech piece of metal, cut in a special unique way that fits into a mechanical unlocking apparatus! Now that would be cool! Oh wait, that’s the key we have today!

May 14, 2015

Gluu: Part III: Gluu proposes free API Security Certification for Open Source community [Technorati links]

May 14, 2015 08:20 PM

traditional-origami-pigeon

Note: This is Part III of a three part series. Part I and II are published here and here, respectively.

It’s so easy to acquiesce to a technology world decided for us by technology giants. However, when it comes to security, if we acquiesce to standards that are too low, it could put the brakes on our ability to benefit from new services.

Consumers, businesses, government and education–every segment of society is being affected by a fundamental shift to digital transactions. The backbone of these digital transactions is APIs. API security certification may seem like a tiny consideration, but the stakes are high.

Gluu very much wants to see a level playing field for API security certification–especially for free open source software (FOSS). We want to see a world where FOSS is the default choice for domain access management. For this reason, we are considering whether Gluu should contribute a portion of its revenues to fund an independent API Security Certification organization, which would enable websites, vendors, and organizations to self-certify for free.

Gluu would not control the organization, in fact there should be a firewall between funding and management. The organization should form policies that exhibit fairness, empathy, and genuine concern for the interest of the community. The goal of certification would be to provide tools and services to engineers to help them get security right. And to provide the public with up-to-date information about how software conforms with the standards defined.

The goal is not to provide a legal mechanism that shifts liability. Gluu believes this important function can be handled between the parties using the technology. More efficient hub-and-spoke legal trust models, such as InCommon in the education sector, can enable a scalable way for people and organizations to manage their relationships with other domains.

There are new standards for API security that are available today and in development. The Internet needs an innovative, comprehensive and democratic certification program for API security. In this instance, we should simply not acquiesce.

 

Katasoft: Five Practical Tips for Building Your Java API [Technorati links]

May 14, 2015 07:00 PM

Increasingly, Java developers are building APIs for their own apps to consume as part of a micro-services oriented architecture, or for consumption by external services. At Stormpath we do both, and we’re experts in the “complications” this can create for a development team. Many teams find it difficult to manage authentication and access control to their APIs, so we want to share a few architectural principles and tips to make it easier to manage access to your Java API.

For a bit of context: Stormpath, at its core, is a Java-based REST+JSON API built on the Spring Framework, using Apache Shiro as an application security layer. We store user credentials and data on behalf of other companies, so for us security is paramount. Thus, my first requirement for these tips is that they help manage access to your Java API securely.

We also evaluated tips based on whether they work well in a services-based architecture like ours, whether they benefit both internal and public APIs, and whether they offer developers increased speed and security.

On to the fun part!

Secure Authentication Requests with TLS

I like to think of an API as a super-highway of access to your application. Allowing basic authentication requests without TLS support is like allowing people to barrel down your highway… drunk… in a hydrogen-powered tank… When that request lands, it has the potential to wreak havoc in your application.

We have written extensively on how to secure your API and why you shouldn’t use password-based authentication in an API. When in doubt, at a bare minimum, use Basic Authentication with TLS.
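
For the calling side, here is a minimal sketch of Basic Authentication over TLS in plain Java; the endpoint and the API key pair are placeholders you would substitute with your own:

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthOverTlsExample {

    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your API endpoint and the caller's API key pair.
        String apiKeyId = "YOUR_API_KEY_ID";
        String apiKeySecret = "YOUR_API_KEY_SECRET";

        String credentials = Base64.getEncoder().encodeToString(
                (apiKeyId + ":" + apiKeySecret).getBytes(StandardCharsets.UTF_8));

        // The https:// scheme is what puts TLS underneath the Basic credentials.
        URL url = new URL("https://api.example.com/v1/accounts");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        conn.setRequestProperty("Accept", "application/json");

        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}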

How you implement TLS/SSL is heavily dependent on your environment – both Spring Security and Apache Shiro support it readily. But I think devs frequently omit this step because it seems like a lot of work. Here is a code snippet that shows how a servlet retrieves certificate information:

public void doGet(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException {
    // When the request arrives over TLS with client authentication, the servlet
    // container exposes the client's certificate chain as a request attribute.
    X509Certificate[] certs = (X509Certificate[])
            req.getAttribute("javax.servlet.request.X509Certificate");
    // ... use the certificates (or fall back to Basic auth over TLS) ...
}

Not a lot of work. I’ve posted some links to different tutorials at the bottom if you’re new to TLS or want to ensure you’re doing it right. Pro tip: Basic Auth with TLS is built into Stormpath SDKs.

Build Your Java Web Service with Spring Boot

Spring Boot is a fantastic way to get a Java API into production without a lot of setup. As one blogger wrote, “It frees us from the slavery of complex configuration files, and helps us to create standalone Spring applications that don’t need an external servlet container.” Love. It.

There are a ton of tutorials (see below) on building Restful Web services with Spring Boot, and the great thing about taking this approach is that much of the security is pre-built, either through a sample application or a plugin. SSL in Spring Boot, for instance, is configured by adding a few lines to your application.properties file:

server.port = 8443
server.ssl.key-store = classpath:keystore.jks
server.ssl.key-store-password = secret
server.ssl.key-password = another-secret

Stormpath also offers Spring Boot Support, so you can easily use Stormpath’s awesome API authentication and security features in your Spring Boot App.
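
To give a sense of how little code is involved, here is a minimal sketch of a standalone Spring Boot JSON endpoint. It assumes only the spring-boot-starter-web dependency, and the class, path, and field names are illustrative:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

// A standalone Spring Boot application exposing a single JSON resource;
// no external servlet container or XML configuration required.
@SpringBootApplication
@RestController
public class ApiApplication {

    public static void main(String[] args) {
        SpringApplication.run(ApiApplication.class, args);
    }

    // GET /greetings/{name} returns a JSON object such as {"message":"Hello, Ada"}.
    @RequestMapping(value = "/greetings/{name}", method = RequestMethod.GET)
    public Greeting greet(@PathVariable("name") String name) {
        return new Greeting("Hello, " + name);
    }

    public static class Greeting {
        private final String message;

        public Greeting(String message) {
            this.message = message;
        }

        public String getMessage() {
            return message;
        }
    }
}

Run it with the standard Spring Boot Maven plugin (mvn spring-boot:run) or as an executable jar, and the embedded container serves the endpoint; with the SSL properties above in place it listens on 8443.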

Use a Java API to Visualize Data…About your Java API

So meta. APIs are (potentially, hopefully!) managing lots of data, and the availability of that data is critical to downstream connections. For APIs in production, analytics and monitoring are critical. Most users won’t warn you before they start load testing, and good usage insight helps you plan your infrastructure as your service grows.

Like other tech companies (Coursera, Indeed, BazaarVoice), we use DataDog to visualize our metrics and events. They have strong community support for Java APIs, plus spiffy dashboards.

Also, we project our dashboards on a wall in the office. This certainly doesn’t replace pager alerts on your service, but it helps make performance transparent to the whole team. At Stormpath, this has been a great way to increase and inform discussion about our service delivery and infrastructure – across the team.

Encourage Your Users To Secure Their API Keys Properly

The most terrifying thing that happens to me at work is when someone emails me their Stormpath API key. It happens with shocking frequency, so we went on a fact-finding mission and discovered a scary truth: even awesome devs use their email to store their API key and/or password… and occasionally accidentally hit send. When these land in my inbox, a little part of me dies.

Some ideas:

– Make your API key button red. I’m not above scaring people with a red button.
– At Stormpath we encourage storing the API key/secret in a file only readable by the owner. You can instruct your users to do this via the terminal, regardless of what language they are working in. Our instructions look like this (for an API key file named apiKey.properties):

Save this file in a secure location, such as your home directory, in a hidden .stormpath directory. For example:

$ mkdir ~/.stormpath

$ mv ~/Downloads/apiKey.properties ~/.stormpath/

Change the file permissions to ensure only you can read this file. For example:

$ chmod go-rwx ~/.stormpath/apiKey.properties
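
Once the file is locked down, loading it from Java is straightforward. The sketch below assumes the apiKey.id and apiKey.secret property names found in Stormpath's generated file; adjust them if your file uses different keys:

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ApiKeyLoader {

    public static void main(String[] args) throws IOException {
        // Resolve ~/.stormpath/apiKey.properties under the current user's home directory.
        File keyFile = new File(System.getProperty("user.home"), ".stormpath/apiKey.properties");

        Properties props = new Properties();
        try (InputStream in = new FileInputStream(keyFile)) {
            props.load(in);
        }

        // Property names assumed from the downloaded file; check yours if these come back null.
        String id = props.getProperty("apiKey.id");
        String secret = props.getProperty("apiKey.secret");

        // Never print or log the secret itself; just confirm it was found.
        System.out.println("API key id: " + id);
        System.out.println("Secret loaded: " + (secret != null && !secret.isEmpty()));
    }
}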

Stormpath Makes API Authentication, Tokens and Scopes Easy

Finally, a shameless plug: Stormpath automates a lot of functionality for Java WebApps. Our API security features generate your API keys, manage authentication to your API, allow you to control what users with keys have access to via groups and custom permissions, and manage scopes and tokens.

We lock down access with awesome Java security infrastructure. Check out Stormpath for API Security and Authentication and our API Authentication Guide. These features work with all our Java SDKs (Servlets, Spring Boot, Apache Shiro and Spring Security) and will save you tons of time.

As ever, feel free to comment if we missed anything or you have additional suggestions. If you want help with your Stormpath setup, email support@stormpath.com and a technical human will get back to you quickly.

Resources

TLS Tutorials & Sample Apps

Spring Boot your REST API service

Ben Laurie - Apache / The Bunker: Duck with Orange and Fennel [Technorati links]

May 14, 2015 04:28 PM

Duck breasts
Honey
Soy sauce
Fennel bulbs
Orange

Sous vide the duck breasts with a bit of honey and salt at around 56C for about an hour to 90 minutes (sorry, but sous vide really is the path to tender duck breasts – if you can’t sous vide, then cook them however you like, to rare or medium rare). Let them cool down a little, then fry for a few minutes on each side to brown (if you’ve done the sous vide thing).

Let them rest for 5-10 minutes, slice into 1/4″ slices.

Thinly slice the fennel.

Peel the orange and break the segments into two or three chunks each.

Quickly stirfry the duck breasts for just a short while – 30 seconds or so. Add soy and honey. Throw in the orange chunks and sliced fennel and stirfry until the fennel has wilted slightly and the orange is warm (and the duck is still somewhat rare, so start with pretty rare duck!).

And then you’re done.

I suspect this would be improved with some sesame seeds stirfried just before the duck breasts, but I haven’t tried it yet.

Courion: Courion recognized as a Hot Cybersecurity Company to Watch in 2015 [Technorati links]

May 14, 2015 03:54 PM

Access Risk Management Blog | Courion

Cybersecurity Ventures, a research and market intelligence firm focused on companies in the cyber security industry, which it states is projected to grow to more than $155 billion by 2019, recently published the ‘Cybersecurity 500’, what the firm describes as a list of the world’s hottest and most innovative cyber security companies.

We’re delighted that Courion was recognized on the list.

blog.courion.com

May 13, 2015

OpenID.netCertification pilot expanded to all OIDF members [Technorati links]

May 13, 2015 09:52 AM

The OpenID Foundation has opened the OpenID Certification pilot phase to all OpenID members, as the Board previously announced we would do in May. This enables individual and non-profit members to also self-certify OpenID Connect implementations. The OpenID Board has not yet finalized beta pricing to cover the costs of certification applications during the next phase of the 2015 program. OpenID Foundation Members’ self-certification applications will be accepted at no cost during this pilot phase. We look forward to working with all members on the continued adoption of the OpenID Certification program, including individual and open source implementations.

Don Thibeau
OpenID Foundation Executive Director

May 12, 2015

Kevin MarksHow quill’s editor looks [Technorati links]

May 12, 2015 12:03 AM

a bit familiar?

May 11, 2015

Kevin MarksDoes editing UI affect writing style? [Technorati links]

May 11, 2015 11:54 PM

Listening to This Week in Google, I heard Jeff and Gina debating writing on Medium, with Gina sorry that people didn't use their own blogs any more, and Jeff saying that Medium's styling and editor made him want to write better to suit the Medium house style.

So I wonder if Aaron’s new version of Quill, with its Medium-style editing for micropub, and Kyle’s Fever Dream, which posts to WordPress, Blogger and Tumblr via micropub, could help change this.

Kevin Marks [Technorati links]

May 11, 2015 11:51 PM
having fever dreams about quill

GluuPart II: Beware of a Microsoft-Google Internet Security Oligarchy [Technorati links]

May 11, 2015 05:28 PM


Note: This is Part II of a three part series. Part I and III are published here and here, respectively.

Microsoft and Google agreeing on Internet Security is a good thing. Consensus on standards from leading technology companies is essential to adoption. However, at the same time, such collaboration requires the community to remain vigilant to avoid potential anti-competitive activity that may impede innovation.

Some of you may have already read my previous blog about how the nonprofit OpenID Foundation (OIDF) unfairly handed out a valuable favor to its leading corporate sponsors–participation in a special pilot program to promote a new OpenID Connect certification program. Coincidentally, the two most renowned free open source implementations were left out.

For simplicity, let’s just call this episode “FOSSgate” (FOSS = Free Open Source Software).

I was discussing this situation with a friend who happens to be a law professor, and he commented that it raises concerns about anti-competitive behavior. But why are Microsoft and Google, who are bitter competitors, collaborating in the first place? The answer is best explained in a diagram that is adapted from Michael Porter’s value chain concept:

(Diagram: enterprise competition value chain, adapted from Porter)

Although Microsoft and Google compete on products, services, business process, and all the other activities in the green part of the above diagram, they do not compete on middleware security, such as the mechanisms defined by the OpenID Connect standards. Non-user facing security is a “supporting activity.” For example, Google does not say that it has better API security than Microsoft (or vice versa). By collaborating on OpenID Connect, Microsoft and Google are simply saving money by pooling their resources for an expense they anyway have to bear. There is nothing wrong with this–in fact it’s a beautiful thing when applied correctly.

But there is one thing better than sharing expenses–that is turning an expense into a profit center. Again, this can work great, as long as the power is not wielded in a way that discourages innovation. However, the modus operandi of large technology companies is to protect intellectual property, and create monopolies (or in this case, an oligarchy). In the OIDF board minutes from April 22, 2015, we discover that the OpenID Foundation has registered the “OpenID Registered” certification mark in the US, Canada, EU, and Japan. Such a certification mark is a typical way to create a kind of monopoly.

But how would the OIDF go from Certification Program, to monopoly? The answer is simple: get the OIDF Certification approved for certain types of government transactions, and then be the only one who can issue the required certification mark. To this end, the OIDF is diligently working. The same minutes referenced above also report: “The US federal government is planning to write a profile of OpenID Connect… Apparently the goal is to mirror the US government SAML profile.”

Can the executive director and the OIDF board be trusted to provide the leadership and the requisite amount of oversight to head-off the kind of anti-competitive behavior to which Microsoft and Google are prone, protecting the public’s trust? Was FOSS-gate a singularity, or was it the one cockroach you see, while 1,000 more are hiding in the walls? Before we acquiesce to grant the OIDF an oligarchy that controls Internet Security Certification, I think these questions need to be answered.

Perhaps some of my concerns are covered in an anti-trust statement. For example, OASIS, a model of good governance, publishes their anti-trust guidelines. I couldn’t find this document linked to the OIDF Website, just like I couldn’t find the minutes of the board meetings.

Note: This is Part II of a three part series. Part I and III are published here and here, respectively. 

KatasoftWhat the Heck is OAuth? [Technorati links]

May 11, 2015 03:00 PM

Stormpath spends a lot of time building authentication services and libraries, so we’re frequently asked by developers (new and experienced alike): “What the heck is OAuth?”

There’s a lot of confusion around what OAuth actually is.

Some people consider OAuth a login flow (like when you sign into an application with Google Login), and some people think of OAuth as a “security thing”, and don’t really know much more than that.

I’m going to walk you through what OAuth is, explain how OAuth works, and hopefully leave you with a sense of how and where OAuth can benefit your application.

What Is OAuth?

To begin at a high level, OAuth is not an API or a service: it is an open standard for authorization and any developer can implement it.

OAuth is a standard that applications (and the developers who love them) can use to provide client applications with ‘secure delegated access’. OAuth works over HTTP and authorizes Devices, APIs, Servers and Applications with access tokens rather than credentials, which we will go over in depth below.

There are two versions of OAuth: OAuth 1.0a and OAuth2. These specifications are completely different from one another, and cannot be used together: there is no backwards compatibility between them.

Which one is more popular? Great question! Nowadays (at the time of writing), OAuth2 is no doubt the most widely used form of OAuth. So from now on, whenever I write just “OAuth”, I’m actually talking about OAuth2 — as it is most likely what you’ll be using.

Now — onto the learning!

What Does OAuth Do?

OAuth is basically a protocol that supports authorization workflows. What this means is that it gives you a way to ensure that a specific user has permissions to do something.

That’s it.

OAuth isn’t meant to do stuff like validate a user’s identity — that’s taken care of by an Authentication service. Authentication is when you validate a user’s identity (like asking for a username / password to log in), whereas authorization is when you check to see what permissions an existing user already has.

Just remember that OAuth is a protocol for authorization.

How OAuth Works

There are 4 separate modes of OAuth, which are called grant types. Each mode serves a different purpose, and is used in a different way. Depending on what type of service you are building, you might need to use one or more of these grant types to make stuff work.

Let’s go over each one separately.

The Authorization Code Grant Type

The authorization code OAuth grant type is meant to be used on web servers. You’ll want to use the authorization code grant type if you are building a web application with server-side code that is NOT public. If you want to implement an OAuth flow in a server-side web framework like Express.js, Flask, Django, or Ruby on Rails, the authorization code grant is the way to go.

Here’s how it works, and how it typically looks to the user: a familiar provider login and consent screen (think of the Facebook Login dialog).

How to Use Authorization Code Grant Types

You’ll basically create a login button on your login page with a link that looks something like this:

https://login.blah.com/oauth?response_type=code&client_id=xxx&redirect_uri=xxx&scope=email

When the user clicks this button they’ll visit login.blah.com where they’ll be prompted for whatever permissions you’ve requested.

After accepting the permissions, the user will be redirected back to your site, at whichever URL you specified in the redirect_uri parameter, along with an authorization code. Here’s how it might look:

https://yoursite.com/oauth/callback?code=xxx

You’ll then read in the code querystring value, and exchange that for an access token using the provider’s API:

POST https://api.blah.com/oauth/token?grant_type=authorization_code&code=xxx&redirect_uri=xxx&client_id=xxx&client_secret=xxx

NOTE: The client_id and client_secret values you see in the above examples are provided by the identity provider. When you create a Facebook or Google app, for instance, they’ll give you these values.

Once that POST request has successfully completed, you’ll then receive an access token which you can use to make real API calls to retrieve the user’s information from the identity provider.
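
To make that exchange step concrete, here is a minimal, hedged sketch in Java using HttpURLConnection. The host names, credentials and the code value are placeholders taken from the example URLs above, not any real provider’s API, and note that most real providers expect the parameters form-encoded in the POST body (as shown here) rather than in the query string.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AuthCodeExchangeExample {

    public static void main(String[] args) throws Exception {
        // Placeholder values: client_id/client_secret come from the provider,
        // and "code" is the ?code=... value your callback just received.
        String clientId = "xxx";
        String clientSecret = "xxx";
        String code = "xxx";
        String redirectUri = "https://yoursite.com/oauth/callback";

        String body = "grant_type=authorization_code"
                + "&code=" + URLEncoder.encode(code, "UTF-8")
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, "UTF-8")
                + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
                + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8");

        // api.blah.com is the placeholder token endpoint from the example above.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://api.blah.com/oauth/token").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The response is typically a JSON document containing the access_token.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}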

The Implicit Grant Type

The implicit grant type is meant to be used for client-side web applications (like React.js or Angular.js) that don’t have a server-side component — or any sort of mobile application that can use a mobile web browser.

Implicit grants are ideal for client-side web applications and mobile apps because this grant type doesn’t require you to store any secret key information at all — this means you can log someone into your site / app WITHOUT knowing what your application’s client_secret is.

Here’s how it works, and how it typically looks to the user: the same sort of provider login and consent screen (again, think of the Facebook Login dialog).

How to Use the Implicit Grant Type

You’ll basically create a login button on your login page that contains a link that looks something like this:

https://login.blah.com/oauth?response_type=token&client_id=xxx&redirect_uri=xxx&scope=email

When the user clicks this button they’ll visit login.blah.com where they’ll be prompted for whatever permissions you’ve requested.

After accepting the permissions, the user will be redirected back to your site, at whichever URL you specified in the redirect_uri parameter, along with an access token. The token is returned in the URL fragment rather than the query string, so it might look like this:

https://yoursite.com/oauth/callback#access_token=xxx

You’ll then read the access_token value out of the URL fragment (in the browser, since the fragment is never sent to the server), which you can use to make real API calls to retrieve the user’s information from the identity provider.

NOTE: The client_id value you see in the above examples is provided by the identity provider. When you create a Facebook or Google app, for instance, they’ll give you this value.

The Password Credentials Grant Type

The password credentials grant type is meant to be used for first class web applications OR mobile applications. This is ideal for official web and mobile apps for your project because you can simplify the authorization workflow by ONLY asking a user for their username and password, as opposed to redirecting them to your site, etc.

What this means is that if you have built your own OAuth service (login.yoursite.com), and then created your own OAuth client application, you could use this grant type to authenticate users for your native Android, iPhone, and web apps.

But here’s the catch: ONLY YOUR native web / mobile applications can use this method! Let’s say you are Google. It would be OK for you to use this method to authenticate users in the official Google Android and iPhone apps, but NOT OK for some other site that uses Google login to authenticate people.

The reason here is this: by using the password credentials grant type, you’ll essentially be collecting a username and password from your user directly. If you allow a third-party vendor to do this, you run the risk that they’ll store this information and use it for bad purposes (nasty!).

Here’s how it works, and how it typically looks to the user: your own app’s login form (think of the official Facebook app’s login screen).

How to Use the Password Credentials Grant Type

You’ll basically create an HTML form of some sort on your login page that accepts the user’s credentials — typically username and password.

You’ll then accept the user’s credentials, and POST them to your identity service using the following request format:

POST https://login.blah.com/oauth/token?grant_type=password&username=xxx&password=xxx&client_id=xxx

You’ll then receive an access token in the response which you can use to make real API calls to retrieve the user’s information from your OAuth service.

The Client Credentials Grant Type

The client credentials grant type is meant to be used for application code.

You’ll want to use the client credentials grant type if you are building an application that needs to perform non-user related tasks. For instance, you might want to update your application’s metadata — read in application metrics (how many users have logged into your service?) — etc.

What this means is that if you’re building an application (like a background process that doesn’t interact with a user in a web browser), this is the grant type for you!

Here’s how it works: a single token request from your application to the identity provider, with no user involved.

How to Use The Client Credentials Grant Type

You’ll fire off a single API request to the identity provider that looks something like this:

POST https://login.blah.com/oauth/token?grant_type=client_credentials&client_id=xxx&client_secret=xxx

You’ll then receive an access token in the response which you can use to make real API calls to retrieve information from the identity provider’s API service.

NOTE: The client_id and client_secret values you see in the above examples are provided by the identity provider. When you create a Facebook or Google app, for instance, they’ll give you these values.
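
For completeness, here is a similarly hedged Java sketch of the client credentials request; login.blah.com and the credential values are placeholders, and note that many providers accept the client id and secret via HTTP Basic authentication (as shown here) instead of request parameters.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ClientCredentialsExample {

    public static void main(String[] args) throws Exception {
        // Placeholder credentials issued by the identity provider; no end user
        // is involved in this flow, the application authenticates as itself.
        String clientId = "xxx";
        String clientSecret = "xxx";
        String basic = Base64.getEncoder().encodeToString(
                (clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));

        // login.blah.com is the placeholder token endpoint from the example above.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://login.blah.com/oauth/token").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Authorization", "Basic " + basic);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write("grant_type=client_credentials".getBytes(StandardCharsets.UTF_8));
        }

        // The response typically contains a JSON document with the access_token.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}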

Is OAuth2 Secure?

Let’s talk about OAuth security real quick: “Is OAuth2 secure?”

The answer is, unquestionably, NO! OAuth2 is NOT (inherently) SECURE. There are numerous, well-known security issues with the protocol that have yet to be addressed.

If you’d like to quickly get the low down on all of the OAuth2 security issues, I’d recommend this article, written by the famed security researcher Egor Homakov.

So, should you use it anyway? That’s a huge topic we have covered briefly in our post Secure Your API The Right Way. If you need to secure an API, this post will help you choose the right protocol.

Key Takeaways

Hopefully this article has provided you with some basic OAuth knowledge. I realize there’s a lot to it, but here are some key things to remember:

Know What Grant Type to Use

If you’re building an application that integrates with another provider’s login stuff (Google, Facebook, etc.) — be sure to use the correct grant type for your situation.

If you’re building….

Don’t Use OAuth2 for Sensitive Data

If you’re building an application that holds sensitive data (like social security numbers, etc.) — consider using OAuth 1.0a instead of OAuth2 — it’s much more secure.

Use OAuth if You Need It

You should only use OAuth if you actually need it. If you are building a service where you need to use a user’s private data that is stored on another system — use OAuth. If not — you might want to rethink your approach!

There are other forms of authentication for both websites and API services that don’t require as much complexity, and can offer similar levels of protection in certain cases.

Namely: HTTP Basic Authentication and HTTP Digest Authentication.

Use Stormpath for OAuth

Our service, Stormpath, offers the password and client credential workflows as a service that you can add to your application quickly, easily, and securely. Read how to:

If you’ve got any questions, we can be reached easily by email.

And… That’s all! I hope you enjoyed yourself =)

Mark Dixon - OracleDeep Blue Defeated Garry Kasparov [Technorati links]

May 11, 2015 02:14 PM

Eighteen years ago today, on May 11, 1997, an IBM supercomputer named Deep Blue defeated chess champion Garry Kasparov in a six-game chess match, the first defeat of a reigning world chess champion by a computer under tournament conditions.


Did Deep Blue demonstrate real artificial intelligence? The opinions are mixed. I like the comments of Drew McDermott, Professor of Computer Science at Yale University:

So, what shall we say about Deep Blue? How about: It’s a “little bit” intelligent. It knows a tremendous amount about an incredibly narrow area. I have no doubt that Deep Blue’s computations differ in detail from a human grandmaster’s; but then, human grandmasters differ from each other in many ways. On the other hand, a log of Deep Blue’s computations is perfectly intelligible to chess masters; they speak the same language, as it were. That’s why the IBM team refused to give game logs to Kasparov during the match; it would be equivalent to bugging the hotel room where he discussed strategy with his seconds. Saying Deep Blue doesn’t really think about chess is like saying an airplane doesn’t really fly because it doesn’t flap its wings.

It will be fun to see what the future brings. In the meantime, I like this phrase, which I first saw on a cubicle of a customer in Tennessee: “Intelligence, even if artificial, is preferable to stupidity, no matter how genuine.”

May 10, 2015

Mark Dixon - OracleLockheed SR-71 Blackbird [Technorati links]

May 10, 2015 05:24 PM

The Lockheed SR-71 Blackbird has to be one of the coolest airplanes ever built. Fast, beautiful, mysterious … this plane is full of intrigue!


The National Museum of the US Air Force states:

The SR-71, unofficially known as the “Blackbird,” is a long-range, advanced, strategic reconnaissance aircraft developed from the Lockheed A-12 and YF-12A aircraft. The first flight of an SR-71 took place on Dec. 22, 1964, and the first SR-71 to enter service was delivered to the 4200th (later 9th) Strategic Reconnaissance Wing at Beale Air Force Base, Calif., in January 1966. The U.S. Air Force retired its fleet of SR-71s on Jan. 26, 1990, because of a decreasing defense budget and high costs of operation. 

Throughout its nearly 24-year career, the SR-71 remained the world’s fastest and highest-flying operational aircraft. From 80,000 feet, it could survey 100,000 square miles of Earth’s surface per hour. On July 28, 1976, an SR-71 set two world records for its class — an absolute speed record of 2,193.167 mph and an absolute altitude record of 85,068.997 feet.

The closest I ever got to one of these beauties was at the Hill Aerospace Museum near Ogden, Utah. Quite a sight!

May 08, 2015

Mark Dixon - OracleDo We Need a Mobile Strategy? [Technorati links]

May 08, 2015 06:44 PM

It is quite amazing to me how many customers I visit who are really struggling with how to handle mobile devices, data and applications securely. This week, the following cartoon came across my desk. The funny thing to me is that the cartoon was published in 2011. Here it is 2015 and we still struggle!

(Cartoon: Marketoonist)

Mark Dixon - OracleMcDonnell XF-85 Goblin [Technorati links]

May 08, 2015 04:39 PM

I have long been fascinated with airplanes of all kinds. This post is the first of a series of photos of wacky and wonderful aircraft.

We start first with one of the coolest airplanes I have ever seen, the McDonnell XF-85 Goblin. Only two were built and I saw one of them in the Wright Patterson Air Force Base museum back in the mid-1980s.

From the National Museum of the US Air Force site:

The McDonnell Aircraft Corp. developed the XF-85 Goblin “parasite” fighter to protect B-36 bombers flying beyond the range of conventional escort fighters. Planners envisioned a “parent” B-36 carrying the XF-85 in the bomb bay, and if enemy fighters attacked, the Goblin would have been lowered on a trapeze and released to combat the attackers. Once the enemy had been driven away, the Goblin would return to the B-36, hook onto the trapeze, fold its wings and be lifted back into the bomb bay. The Goblin had no landing gear, but it had a steel skid under the fuselage and small runners on the wingtips for emergency landings.

Pretty neat little airplane!


May 07, 2015

Mark Dixon - OracleWe Passed! [Technorati links]

May 07, 2015 10:10 PM

In order to register for an interesting online service this afternoon, I had to perform an Internet speed test.  It was nice to know that we (my computer, my internet connection and I) passed quite handily!

A lot of water has passed beneath the proverbial bridge since 300 baud acoustic coupler modems!


GluuPart I: No TAX on Internet Security Self-Certification [Technorati links]

May 07, 2015 05:08 PM


Note: This is Part I of a three part series. Part II and III are published here and here, respectively.

The OpenID Foundation (OIDF) recently announced a certification program.

“Google, Microsoft, ForgeRock, Ping Identity, Nomura Research Institute, and PayPal are the first industry leaders to participate in the OpenID Connect Certification program and certify that their implementations conform to one or more of the profiles of OpenID Connect standard.”

How was this elite group selected? Was it based on merit, contribution, level of patronage or simply opportunity?

A clear picture emerges from the meeting minutes. The board picked themselves and their best friends to participate in the initial pilot! Yay Board!

Unless I missed it, there was no notification or outreach to the community asking if anyone wanted to be in this prestigious pilot group that would capture the lion’s share of the press and media publicity. In the minutes, Mike Jones from Microsoft dutifully records on October 2, 2014 that “Don has created a draft workflow for self-certification and a proposed term sheet with Roland Hedberg and his university to create and deploy the conformance testing software. Don will also be visiting Microsoft, Google, and Symantec in the next few weeks and among other topics, will discuss certification with each of them. He has also already discussed it with John Bradley of Ping Identity.”

There are many such notes about further discussions regarding Certification pilot members. By October 29, 2014, the minutes note “several companies have expressed interest being among the first adopters, Forgerock, Google, Microsoft, Ping Identity and Salesforce.”

Of course Gluu has a gripe that we were not included in this group.

Gluu has been participating in OpenID Connect interop tests since 2012. These tests became the basis for the certification program. According to the tests in January 2013, the Gluu Server was the best OpenID Connect Provider (OP) implementation. Since that time, the Gluu Server has continued to test as one of the leading implementations.

Last year, Roland Hedberg, author of the current OpenID Connect certification tests (mentioned above in the meeting minutes), had this to say about the Gluu Server in one of Gluu’s press releases: “I see two main features that speak in favor of the Gluu implementation, it has passed all tests for compliance with the standard with flying colors, and it is one of the most complete implementations of OpenID Connect, making it a singularly useful tool.” Not surprisingly, Gluu’s current results are also quite good.

So despite being one of the most active partners throughout the pre-release period of the certification tests, Gluu was excluded from the announcement. I also think MitreID Connect deserved to be given a chance to participate in that pilot. Justin Richer contributed greatly to writing the specification, and he wrote an implementation. In baseball, that would be like pitching and getting an RBI!

In my discussions with the OIDF board members, the justification for the select group of participants was to limit demand on the developer, Roland Hedberg. It took Gluu years to develop its OpenID Connect Provider. It’s hard to believe that a deluge of OpenID Connect Provider implementations will arise and drown the OIDF in self-certification requests. If high demand was a concern, then why is there no mention of resource bandwidth concerns in the many discussions recorded in the OIDF minutes about certification?

But it gets better. After reading the minutes, I had another realization–the reason for the certification program is to force membership in the OIDF. In fact, at first Don Thibeau, the Executive Director at the OpenID Foundation, wanted to use OIDF conformance testing to force registration in both the OIDF and the OIX (the “Open Identity Exchange”), a related organization he also runs. It was like a two-for-one! However, the OIDF board pushed back. Google even expressed concern about the OIDF membership requirement, and asked whether the OIDF would eventually relax the membership requirement.

There also seemed to be some concern about charging for the certification. Ping Identity suggested that a very small fee to financially validate the entity would add value to the program. In subsequent meetings, Ping said they thought there should be no fee for a self-certification program. Kudos to Ping… However, these concerns did not stop the plan to use conformance testing to force OIDF membership. So while the board might maintain that there is no fee, there is clearly an intention to require membership.

Even non-profits need to have a sustaining business model. Certainly, some for-profit companies, like Nok Nok Labs, charge for standards certification. And the allure of OpenID Connect is its massive applicability. So I don’t blame the leadership of the OIDF for perhaps wondering “wouldn’t it be great if everyone who wants to show their product is OpenID Connect compliant would pay a small fee for the privilege?” The minutes clearly indicate a thought process where membership will be required for certification, or a fee equal to the membership fee will be assessed to non-members.

It’s a brilliant plan–MSFT and Google reap most of the savings–their massive scale means they arithmetically have the most to gain by security costs going down. And the OIDF gets funding to make sure most of the intangible benefits (like press opportunities) are routed back to the mother ships.

The OIDF asserts that I see a conspiracy where none exists. That’s just the way it is, and we should accept it… But it is obvious to me that the OIDF’s mission should be to serve the community, not the executive director, or the corporations who occupy the board seats.

Now I know why I wasn’t elected to the OIDF board. Imagine what a pain in the neck I would have been, asking all these questions! Frankly, I wonder why we need a dedicated organization, like the OpenID Foundation, for a few specifications. The IETF, OASIS, Kantara or the W3C already have more generic missions.

What is the OIDF board’s advice to Gluu? Simply renew our corporate membership.

So the OIDF’s plan to generate publicity is a success. And now they want to test their business model–that the certification program will drive memberships, starting with Gluu. Our feedback is simple: Gluu will not pay your tariff.

In many ways the future of the Internet is the future of security. Based on this last experience, I am starting to question whether the OIDF is up to that responsibility. If their intention is to increase the quality of OpenID Connect implementations in order to increase security on the Internet, then I applaud them. But right now, I choose not to pay to participate until my concerns can be addressed.

Note: This is Part I of a three part series. Part II and III are published here and here, respectively.
 

May 06, 2015

Matthew Gertner - AllPeersHow to navigate legal issues when buying or selling a business [Technorati links]

May 06, 2015 11:18 AM

An Interview with Achim Neumann from A. Neumann & Associates, President of a leading Business Brokerage in Pennsylvania, New Jersey, Connecticut, Maryland and Delaware.

Achim Neumann from Neumann Associates Biography

Firstly, thanks so much Mr. Neumann for taking the time to answer our questions today. We’re spotlighting Mergers and Acquisitions this month, and A. Neumann & Associates were referred to us as experts in M&A and the legal issues that arise when buying or selling a business.

Thanks for having me! My firm, A Neumann and Associates, LLC, has worked on and consulted within a wealth of business transactions, so I appreciate you reaching out for me to help answer your questions.

Firstly, what are 3 things you would quickly advise a business owner on in the preparations of deciding to sell their business? You can keep it short and sweet as we know you could probably go all day!

Most importantly, a business needs to get a fair market valuation into place. It will serve many different purposes: it will obviously establish a value. But it will also insert a discipline to collect all the proper documents needed for a sale. Further, the valuation will allow a buyer to make an offer sooner, and it will allow the business owner to sell faster.

If someone decides to hire a business broker to assist them in their acquisition or sale, how should they go about it?

When evaluating different Mergers & Acquisition professionals and business brokers, make sure that they have complete answers to all of your questions. New Jersey remains a state with no regulation or licensing required to be in the industry, so your evaluation becomes all the more important in selecting the right individual. Other states have a similar lack of regulation as well.

Here is a list of things to keep in mind to check up on any potential advisor’s credentials:
•    Is he/she a business broker, or merely a real estate broker attempting to sell businesses?
•    Is he/she affiliated with any key business brokerage organizations?
•    What is the educational background of the professional? Is it Verifiable?
•    Does the professional have a financial-based education, is he familiar with business/personal tax issues?
•    How long has the firm been operating for?
•    Is the broker the principal, or simply an employee with little vested interest in the business?
•    Other than the brokerage business, has the broker run a business before and thus, can relate to your concerns?

We’ve written about this topic pretty extensively and you can read about it more on our website if you’re interested, http://www.neumannassociates.com/selecting-your-advisor.cfm

We all know legal issues arise when complicated transactions happen, especially when buying or selling a business. What is the best way to avoid a lawsuit from the beginning?

Preferably, a seller has a qualified TRANSACTION attorney in place well ahead of the contemplated sale. This will allow the seller to obtain proper legal advice all the way. The same, by the way, also applies to having a CPA involved.

We sometimes run into the situation where the seller does not have an attorney, and waits until he has an offer “on the table”. This is not an intelligent move.

Offers that are made by buyers are subject to a final Definitive Agreement, drawn up typically by the seller’s attorney. Such an agreement needs to be reviewed by both parties, should outline the parameters of the deal, and should prevent lawsuits.

If someone finds themselves being served in the process of selling a business, what should they do?

The first thing they should do is contact their attorney that they should already be working with. Don’t let anyone besides a qualified business attorney give you legal advice.

What sorts of things should be in contracts to ensure a disgruntled buyer doesn’t try and sue after a transaction has occurred, and if they do, that you as the seller are protected?

Usually there are Warranties and Representations in the final Definitive Agreement, under which a seller states what he/she “warrants in the sale”. Between such warranties and the prior due diligence executed by the buyer, the buyer should have a fairly good idea of what he/she is buying.

Usually there are few lawsuits after a transaction. As a matter of fact, we have seen none for the transactions we closed in the past 10 years, due to prior planning, performing due diligence, adhering to the law and

Thank you so much once again for answering our questions.

Thanks again for having me!

The post How to navigate legal issues when buying or selling a business appeared first on All Peers.

OpenID.netCertification Accomplishments and Next Steps [Technorati links]

May 06, 2015 08:08 AM

I’d like to take a moment and congratulate the OpenID Foundation members who made the successful OpenID Certification launch happen. By the numbers, six organizations were granted 21 certifications covering all five defined conformance profiles. See Mike Jones’ note Perspectives on the OpenID Connect Certification Launch for reflections on what we’ve accomplished and how we got here.

We applied to the certification program the same “keep simple things simple” meme that was the touchstone when designing OpenID Connect. But for as much as we’ve already accomplished, there are plenty of good things to come. The next steps are to expand the scope of the Certification program along several dimensions, per the OpenID board’s deliberately phased certification rollout plan. I’ll take the rest of this note to outline these next steps.

One dimension of the expansion is to open the program to all members, including non-profit and individual members. This second phase will be open to OpenID Foundation members, acknowledging the years of work that they’ve put into creating OpenID Connect and its certification program.

Closely related to this, the foundation is working to determine our costs for the certification program in order to establish a beta pricing program for the second phase. The board is on record as stating that pricing will be designed with two goals in mind: covering our costs and helping to promote the OpenID Connect brand and adoption.

Putting a timeline on this, the Executive Committee plans to recommend a beta pricing program for the second phase during its meeting on June 4th for adoption by the Board at its meeting during the Cloud Identity Summit on June 10th. We look forward to seeing certifications of open source, individuals’, and non-profits’ implementations during this phase, as well as continued certifications by organizations.

Another dimension of the expansion is to begin relying party certifications. If you have a relying party implementation, we highly encourage you to join us in testing the tests, just like the pilot participants did for the OpenID Provider certification test suite. Please contact me if you’re interested.

See the FAQ for additional information on OpenID Certification. Again, congratulations on what we’ve already accomplished. I look forward to the increasing adoption and quality of OpenID Connect implementations that the certification program is already helping to achieve.

Ludovic Poitou - ForgeRockOpenDJ Nightly Builds… [Technorati links]

May 06, 2015 07:19 AM

For the last few months, there have been a lot of changes in the OpenDJ project in order to prepare the next major release: OpenDJ 3.0.0. While doing so, we’ve tried to keep options open and continued to make most of the changes in the trunk/opends part, keeping the possibility to release a 2.8 version. And we’ve done tons of work in branches as well as in trunk/opendj. As part of the move to the trunk, we’ve changed the build to use Maven. Finally, at the end of last week, we made the switch on the nightly builds and are now building what will be OpenDJ 3 from the trunk.

For those who are regularly checking the nightly builds, the biggest change is going to be the version number. The new build is now showing a development version of 3.0.

$ start-ds -V
OpenDJ 3.0.0-SNAPSHOT
Build 20150506012828
--
 Name Build number Revision number
Extension: snmp-mib2605 3.0.0-SNAPSHOT 12206

We are still missing the MSI package (sorry to the Windows users, we are trying to find the Maven plugin that will allow us to build the package in a similar way as we previously did with ant), and we are also looking at restoring the JNLP based installer, but otherwise OpenDJ 3 nightly builds are available for testing in different forms: Zip, RPM and Debian packages.

OpenDJ Nightly Builds at ForgeRock.org

We have also changed the minimum version of Java required to run the OpenDJ LDAP directory server: Java 7 or higher is required.

We’re looking forward to getting your feedback.


May 05, 2015

Radiant LogicWhere Are the Customers’ Yachts? [Technorati links]

May 05, 2015 10:39 PM

Current Web Access Management Solutions Will Work for the Customer Identity Market—If We Solve the Integration Challenge

I find it ironic that within the realm of IAM/WAM, we’re only now discovering the world of customer identity, when the need for securing customer identity has existed since the first business transactions began happening on the Internet. After all, the e-commerce juggernauts from Amazon to eBay and beyond have figured out the nuances of customer registration, streamlined logons, secure transactions, and smart shopping carts which personalize the experience, remembering everything you’ve searched and shopped for, in order to serve up even more targeted options at the moment of purchase.

It reminds me of a parable from a classic book on investing*: Imagine a Wall Street insider at the Battery in New York, pointing out all the yachts that belong to notorious investment bankers, brokers, and hedge fund managers. After watching for a while, one lone voice pipes up and asks: “That’s great—but where are the customers’ yachts?”

Could this new focus on “customer identity” be an attempt by IAM/packaged WAM vendors to push their solution toward what they believe is a new market? Let’s take a look at what would justify their bets in the growing customer identity space.

Customer Identity: The Case for the WAM Vendors

The move to digitization is unstoppable for many companies and sectors of the economy, opening opportunities for WAM vendors to go beyond the enterprise employee base. As traditional brick and mortar companies move to a new digitized distribution model based on ecommerce, they’re looking for ways to reach customers without pushing IT resources into areas where they have no expertise.

While there are many large ecommerce sites that have “grown their own” when it comes to security, a large part of this growing demand will not have the depth and experience of the larger Internet “properties.” So a packaged solution for security makes a lot of sense, with less expense and lower risks. And certainly, the experience of enterprise WAM/federation vendors, with multiple packaged solutions to address the identity lifecycle, could be transferred to this new market with success. However, such a transition will need to address a key challenge at the level of the identity infrastructure.

The Dilemma for WAM Vendors: Directory-Optimized Solutions in a World of SQL

As we know, the current IAM/WAM stack is tightly tied to LDAP and Active Directory—these largely employee-based data stores are bolted into the DNA of our discipline, and, in the case of AD, offer an authoritative list of employees that’s at the center of the local network. This becomes an issue when we look at where the bulk of customer identities and attributes are stored: in a SQL database.

So if SQL databases and APIs are the way to access customer identities, we should ask ourselves if the current stack of WAM/federation solutions, built on LDAP/AD to target employees, would work as well with customers. Otherwise, we’re just selling new clothes to the emperor—and this new gear is just as invisible as those customers’ yachts.

Stay tuned over the next few weeks as I dive deeper into this topic—and suggest solutions that will help IAM vendors play in the increasingly vital world of customer identity data services.

*Check out “Where Are the Customers’ Yachts: or A Good Hard Look at Wall Street” by Fred Schwed. A great read—and it’s even funny!


The post Where Are the Customers’ Yachts? appeared first on Radiant Logic, Inc

Mark Dixon - OracleKuppingerCole: 8 Fundamentals for Digital Risk Mitigation [Technorati links]

May 05, 2015 08:45 PM


Martin Kuppinger, founder and Principal Analyst at KuppingerCole, recently spoke in his keynote presentation at the European Identity & Cloud Conference about how IT has to transform and how Information Security can become a business enabler for the Digital Transformation of Business.

He presented eight “Fundamentals for Digital Risk Mitigation”:

  1. Digital Transformation affects every organization 
  2. Digital Transformation is here to stay
  3. Digital Transformation is more than just Internet of Things (IoT) 
  4. Digital Transformation mandates Organizational Change
  5. Everything & Everyone becomes connected 
  6. Security and Safety is not a dichotomy 
  7. Security is a risk and an opportunity 
  8. Identity is the glue and access control is what companies need

I particularly like his statements about security being both risk and opportunity and that “Identity is the glue” that holds things together.

Wish I could have been there to hear it in person.

Mark Dixon - OracleFirst American in Space – May 5, 1961 [Technorati links]

May 05, 2015 08:24 PM

Fifty-four years ago today, on May 5, 1961, a long time before I knew anything about Cinco de Mayo, Mercury Astronaut Alan B. Shepard Jr. blasted off in his Freedom 7 capsule atop a Mercury-Redstone rocket. His 15-minute sub-orbital flight made him the first American in space.

His flight further fueled my love for space travel that had been building since the Sputnik and Vanguard satellites were launched a few years previously.

 

Alan Shepard, Mercury-Redstone Rocket

Kantara InitiativeKantara UMA Standard Achieves V1.0 Status, Signifying A Major Milestone for Privacy and Access Control [Technorati links]

May 05, 2015 12:55 PM

Kantara Initiative is calling on organizations to implement User-Managed Access in applications and IoT systems

Piscataway, NJ, May 5, 2015 – Kantara Initiative announces that the User-Managed Access (UMA) Version 1.0 specifications have achieved the status of Kantara Initiative Recommendations through an overwhelming show of support from the organization’s Members. To mark this milestone, Kantara will be holding a free live webcast on May 14 at 9am Pacific.

Developed through an open and transparent standards-based approach, the UMA web protocol enables both privacy-enhancing consumer-controlled scenarios for release of personal data and next-generation business scenarios for access management. The UMA Work Group has identified a growing variety of use cases, including patient-centric health data sharing, citizen-to-government attribute control, student-consented data sharing, corporate authorization-as-a-service, API security, Internet of Things access control, and more.

“UMA has been generating industry attention with good reason. UMA bridges a critical gap by focusing on customer and citizen engagement to transform privacy considerations into real business development opportunities,” said Joni Brennan, Executive Director, Kantara Initiative.

UMA is an OAuth-based protocol designed to give a web user a unified control point for authorizing who and what can get access to their online personal data.  By letting a user lodge policies with a central authorization service that requires a requester “trust elevation” (for example, proving who they are or promising to adhere to embargoes) before that requester can access data, UMA enables privacy controls that are individual-empowering – an idea that has perhaps gotten lost in the rush to corporate privacy practices that have focused on compliance.

This model enables individuals interacting with the web to conveniently reuse “sharing circles” and set up criteria for access at a single place, referred to as the UMA authorization server, and then go about their lives. For enterprises, deploying UMA allows applications to be loosely coupled to authorization methods, significantly reducing complexity, and to make the process of access decision-making more dynamic.

“Existing notice-and-consent paradigms of privacy have begun to fail, as evidenced by the many consumers and citizens who feel they have lost control of how companies collect and use their personal information,” said Eve Maler, ForgeRock’s VP of Innovation & Emerging Technology and UMA Work Group Chair. “We’re excited that UMA’s features for asynchronous and centralized consent have matured to reach V1.0 status.”

“The future Internet is very much about consumer personal data as an important part of the broader data-driven economy ecosystem. If personal data is truly a digital asset, then consumers need to ‘own’ and control access to their various data repositories on the Internet. The UMA protocol provides this owner-centric control for sharing of data and resources at Internet scale,” says Thomas Hardjono, Executive Director of the MIT Kerberos & Internet Trust Consortium and UMA Work Group specification editor.

“With the growing importance of personal data on the Internet, there is a clear need for new ways to allow individual users be in control of their data as an economic asset.” says Dr. Maciej Machulak, Chief Identity Architect of Synergetics and UMA Work Group Vice-Chair. “UMA can become the very basis for the profound trust assurance and notably the trust perception with end-users and organizations, that is required to finally introduce end-users as genuine stakeholders in their own processes and the integration point of their own data.”

Companies, organizations, and individuals can get involved by joining Kantara Initiative and the UMA Work Group, taking part in planned interoperability testing, and attending the webcast.

“In the Digital Economy where personal data is the new currency, User-Managed Access (UMA) provides a unique vision to empower individuals more effectively and efficiently and enables a new approach to secure and protect distributed resources, unlocking the value of personal data,” said Domenico Catalano, Oracle.

“UMA promotes privacy by facilitating access by reference instead of by copy and, most important, by shifting access controls away from inscrutable prior consent to user-transparent authorization,” said Adrian Gropper, MD, CTO, Patient Privacy Rights.

“UMA is the first standard to enable centralized API access management for individuals or organizations. The promise of UMA is to enable the consolidation of security for a diverse group of cloud services. Combined with OpenID Connect for client and person identification, the Internet now has a modern standards infrastructure for Web and mobile authentication and authorization,” said Mike Schwartz, Founder & CEO, Gluu.

“UMA is a major step forward in giving individuals control over their own personal data on the internet. It is a key building block of an environment where people can continuously control access to their sensitive data, rather than simply handing that data over to vendors and hoping they don’t misuse it (or lose it),” said Gil Kirkpatrick, CTO, ViewDS Identity Solutions.

Kantara Initiative provides strategic vision and real world innovation for the digital identity transformation. Developing initiatives including: Identity Relationship Management, User-Managed Access (EIC Award Winner for Innovation in Information Security 2014), Identities of Things, and Minimum Viable Consent Receipt, Kantara Initiative connects a global, open, and transparent leadership community. Luminaries from organizations including: CA Technologies, Experian, ForgeRock, IEEE-SA, Internet Society, Radiant Logic and SecureKey drive strategic insights to progress the transformational elements needed to leverage borderless Identity for IoT, access control, context, and consent.

Mark Dixon - OracleIAM Euphemism: Opportunity Rich Environment [Technorati links]

May 05, 2015 03:36 AM

Recently I heard a  executive who had been newly hired by a company describe their current Identity and Access Management System as an “Opportunity Rich Environment”. Somehow that sounds better than “highly manual, disjointed, insecure and error-prone,” doesn’t it?

 

May 03, 2015

Rakesh RadhakrishnanThreat IN based AuthN controls, Admission controls and Access Controls [Technorati links]

May 03, 2015 09:52 PM
For large enterprises evaluating next-generation Threat Intelligence platforms (Indicator of Compromise detection tools) such as FireEye, Fidelis and Sourcefire, one of the KEY evaluation criteria is how much of the Threat Intelligence generated can act as Actionable Intelligence. This requires extensive integration of the Threat IN platform with several control systems in the network and on end points. It may also include several "COAs" (recommended courses of action) in the STIX XML attribute set, based on the malware detected. This approach paves the way for enterprises to mature their security architecture into one that is Threat Intelligence centric and adaptive to such Threat IN. This integration to a Threat IN platform and a Threat Analytics Platform can range from:
Mobile end points and APT integration, similar to FireEye and AirWatch or FireEye and MobileIron.
Kudos to FireEye for an amazing set of security controls integrations and their support for STIX. Integrating security systems together for cross-control co-ordination is very COOL! Threat IN standards such as TAXII, STIX and CybOX allow for XML-based expression of an "indicator of compromise" (STIX) and secure straight-through integration (TAXII). Since dozens of vendors have started expressing AC policies in XACML - from IBM Guardium, to Nextlabs DLP and FireLayer Cloud Data Controller, to Layer7 (XML/API firewalls) and Queralt (PAC/LAC firewalls) - it is only natural to expect a STIX profile supporting XACML (hopefully an effort from OASIS in 2015). The extensibility of XACML allows for expression of ACL, RBAC, ABAC, RiskADAC and TBAC all in XACML, and the policy combination algorithms in XACML can easily extend to "deny override" when it comes to real-time Threat Intelligence, as sketched in the code below. This approach will allow enterprises to capture Threat IN and implement custom policies based on that IN in XACML, without one-off integrations and vendor LOCK IN! This is similar to the approach proposed by OASIS here. It's good to see many vendors supporting STIX - including Splunk, Tripwire and many more.
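
To make the "deny override" idea concrete, here is a minimal, purely illustrative sketch in Java. It is not a real XACML engine or any vendor's API; it only shows the combining logic where a single Deny contributed by a threat-intelligence-fed policy overrides any Permit produced by the usual RBAC/ABAC policies.

import java.util.List;

public class DenyOverridesCombiner {

    enum Decision { PERMIT, DENY, NOT_APPLICABLE }

    // A policy is anything that can evaluate an access request to a decision.
    interface Policy {
        Decision evaluate(Object request);
    }

    // Deny-overrides: a single DENY (for example, from a policy fed by
    // STIX/TAXII indicators of compromise) beats any number of PERMITs
    // produced by the regular RBAC/ABAC policies.
    static Decision combine(List<Policy> policies, Object request) {
        Decision result = Decision.NOT_APPLICABLE;
        for (Policy policy : policies) {
            Decision decision = policy.evaluate(request);
            if (decision == Decision.DENY) {
                return Decision.DENY;
            }
            if (decision == Decision.PERMIT) {
                result = Decision.PERMIT;
            }
        }
        return result;
    }
}

In a real deployment this logic lives inside the XACML policy decision point, and the threat-intelligence policy would be refreshed automatically as new STIX indicators arrive over TAXII.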


Why do we need such Integrated Defense?

Simply having Breach Detection capabilities will not suffice; we need both prevention and tolerance as well.

One threat use case model can include every possible technology put to use; for example, clone-able JSON BOTS (JSON over XMPP) infused from fast-flux domains, using Trojan Zebra style (parts of the code shipped randomly), assembled at a pre-determined time (leveraging a zero-day vulnerability) along with an external commander, appearing in the server-side components of the network, making Secure Indirect Object References (the opposite of IDOR) because they have already been capturing the direct object names needed (over time, with Big Data bots doing the reconnaissance), to ex-filtrate sensitive data in seconds and go dormant in minutes. That is one malware use case: a Bot/BotNet, C&C, zero-day, Trojan Zebra and an APT that leverages Big Data, all combined.

You don't know what hit you (at the application layer, since it is distributed code via distributed injections that was cloned and went dormant in seconds), where it came from (not traceable to IP addresses or domains, thanks to fast-flux DNS), how it entered (Trojan Zebra), when it came alive (zero day), how persistent it can be (cloning), what path it took (distributed) or what data it stole.

This is the reality of today! It is quite difficult to catch in a VEE (virtual execution environment) as well. What is needed is a combination of advanced Threat Detection (threat intelligence and threat analytics) with responsive/dynamic Threat Prevention systems (threat IN based access controls, including format-preserving encryption and fully homomorphic encryption) and short-lived, stateless, self-cleansing (http://www.scitlabs.com/en/) Threat Tolerant systems as well (which can include web containers and app containers). SCIT Labs-like technology used for continually maintaining high-integrity network security services (rebooting from a trusted image) is very critical, to ensure that the preventive controls at the Data Layer will work (consistent, cohesive and co-ordinated data-object-centric policies in DLP, DB FW and Data Tokenization engines).

If the data is the crown jewel the thieves are after, imagine this: you know your community gates are breached and you have a warning, so you would lock down your house and ship that "pricey diamond" via a tunnel to the east coast, wouldn't you? Even if the thief barges through your front door, gets to the safebox and breaks it open, it is ONLY to find the diamond GONE! (That's an intrusion tolerant design.) And fortunately, in the digital world that's quite possible: a data center is identified with an IOC by a FireEye or Fidelis; instantly the AC policies for AuthN, end points, apps and data change, while at the same time the DBMS hosting the sensitive data (replicated to an unbreached DR site) is quiesced and the data is purged.

Dynamic Defensive Designs #1

Rakesh RadhakrishnanSorting the Spectrum of SQLinjection Threats [Technorati links]

May 03, 2015 05:11 AM
I read the whitepapers from Waratek on SQLi and this recent writeup on the role of an IDS/IPS in detecting SQLi as well. There are in fact several PhD theses on the topic of SQLi, like this one looking at the TAINT-style characteristics of SQLi and this one that discusses topics from input validation to static analysis.


To me there are 9 layers of defense against the 100 types of SQLi threats out there (14 such threats are listed).






1st and foremost, at development and design time of an application, make sure to have a Data Abstraction Layer (UI layer, API layer, App layer, DAL layer, followed by the data repository (DB) layer). The Data Abstraction Layer maps security-sensitive objects in a DB, like table names, column names and stored procedures, as objects for the application to reference (secure indirect object reference, as opposed to insecure direct object reference). If the design is done right, even for use cases that require distributed queries (SQL embedded in XML) there is no need for SQL to traverse between federated parties (an indirect reference to an object that maps to a SQL stored procedure will do the job). Similarly, dynamic SQL can be avoided in totality in many cases, and in the remaining cases the curses of dynamic SQL can be mitigated.




In addition (2nd), an application penetration test tool like HP WebInspect can also highlight the SQLi vulnerabilities in an application so that they are rectified at deployment time.





3rd, beyond design time, development time and deployment time: once the application is up and running in a private cloud DC or a public cloud, the run time firewalls include an XML/API firewall. The role of such XML and API firewalls is to ensure parameter (input) validation, SQL-embedded-in-XML validation, etc. (policies can be fine grained for each XML object, from and to destination URIs).





Then (4th) comes a run time application firewall such as Imperva or Waratek to validate the integrity of any dynamic SQL generated at run time and look for tainted statements.

Beyond that (5th) come functional access controls, offered by vendors such as Axiomatics, which ensure that role-based and attribute-based fine-grained access to functional modules within the application is externalized to, and reused from, an entitlement engine that can also apply additional SQL filters (row level, column level and even cell level). These decisions can potentially be called from an XML firewall through to a XACML engine.
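From the application's side, externalized attribute-based authorization boils down to shipping subject, action and resource attributes to the decision point and enforcing the answer. A minimal sketch (the endpoint URL, attribute names and JSON shape are hypothetical stand-ins, not the Axiomatics or XACML wire format; it uses the global fetch available in modern Node.js):

// Hypothetical call to a policy decision point (PDP); a real deployment would
// use the entitlement engine's XACML REST/JSON profile.
async function isPermitted(subject, action, resource) {
  var decisionRequest = {
    subject:  { role: subject.role, department: subject.department },
    action:   { id: action },                          // e.g. 'view'
    resource: { type: resource.type, owner: resource.owner }
  };

  var response = await fetch('https://pdp.example.com/authz/decision', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(decisionRequest)
  });
  var result = await response.json();

  // The application only enforces the decision; the policy logic (including
  // any row/column/cell-level SQL filters) stays in the central engine.
  return result.decision === 'Permit';
}

// Usage:
// if (await isPermitted(user, 'view', { type: 'invoice', owner: 'acme' })) { ... }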

The 6th layer of defense is a database firewall that also supports XACML and protects the DBMS from boot time to run time to quiesce time to shutdown time, working in alignment with the Data Abstraction Layer. The DB firewall can work against SQL injections stemming from both internal and external threats. In Guardium, access policies, extraction policies and exception policies can all be expressed in XACML. Semantic clustering algorithms can be used to inspect SQL statements in a DB firewall, including stored procedures. DB firewalls also perform a critical role in privileged-access isolation for DB admins, so that privileged access to sensitive data is not possible (PII and PCI data remain encrypted even for privileged admins).

Then comes an APT firewall like FireEye (7th layer) that inspects the payload and identifies SQLi attacks stemming from endpoints, including by inspecting the HTTPS payload.



Even Cisco ISE-like solutions that are app-aware have a role to play (8th), taking into account strong user and device authentication context, access network context and VPN context before allowing access to apps that contain PCI or PII data.



Finally (9th), SIEM and IDS/IPS also play a role in detecting anomalies and reporting on them.


Amongst the SQL injection defense mechanisms discussed at OWASP, such as prepared statements, stored procedures, DB-specific escaping, least privilege, whitelisting and input validation, the majority are meant to be applied at design time. When running penetration tests for SQL injection, all possible exploitation techniques must be validated.
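The first of those defenses, parameterized queries (prepared statements), looks like this in practice. A minimal Node.js sketch using the common placeholder-binding style of the mysql driver; the table, columns and connection settings are illustrative:

var mysql = require('mysql');
var pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'shop' });

// Vulnerable pattern: user input concatenated straight into the SQL string,
//   "SELECT * FROM users WHERE email = '" + email + "'"
// Parameterized pattern: the driver binds the value, so a payload such as
// "x' OR '1'='1" is treated purely as data, never as SQL syntax.
function findUserByEmail(email, callback) {
  pool.query('SELECT id, email FROM users WHERE email = ?', [email], callback);
}

findUserByEmail("x' OR '1'='1", function (err, rows) {
  if (err) throw err;
  console.log(rows); // [] -- the literal string matches nothing, no injection
});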


SQL injections are complex and there is NO silver-bullet answer; what we really need is co-ordinated, threat-intelligence-driven layered controls that can address the hundreds of SQLi threat use cases end to end.

May 01, 2015

Gluu2 Approaches to Open Source Single Sign-On (SSO) and Access Management [Technorati links]

May 01, 2015 08:25 PM


Due to tightening regulations, increased usage of third-party applications, and the sheer volume of breaches caused by weak credentials, single sign-on (SSO) is increasingly becoming a ubiquitous enterprise security requirement. Many organizations also need to centralize policies to control access to valuable APIs or Web resources.

SaaS services seem like a good option at first, but if you have a complex integration with a legacy backend, or you’re just a paranoid security nerd, the DIY open source approach has distinct advantages with regard to cost, security and privacy. In this blog, we’ll discuss the two ways to protect an application using a centralized access management platform like the Gluu Server.

How do you control access to a home grown application?

For many organizations, purpose-built applications provide a competitive advantage. Two application security design patterns have emerged. Which one to pick depends on the trade-off between easier devops and how deeply you want to integrate centralized security policies into your application.

Web Server Filter / Reverse Proxy

This is the tried and true approach since the introduction of Netegrity Siteminder in 1998: install an Apache Web Server “mod”, or an IIS “ISAPI Filter”, to enforce the presence of a token in an HTTP request. If no token is present, the Web server may redirect the person, or return a meaningful code or message to the application. Your devops team will love this approach, since they can just manage the web server configuration files. It will be crystal clear to them what policies apply to what URLs.

To require SAML authentication in a web server, Gluu recommends mod_shib. If you’d prefer to use the Gluu Server’s OpenID Connect interfaces, we recommend mod_auth_openidc. Gluu is working on an Apache module for UMA 1.0, a new OAuth2-based profile that defines RESTful, JSON-based, standardized flows and constructs for coordinating the protection of any API or web resource. Stay tuned; it should be available by the end of Q2 2015.

An example of the UMA Apache directives:


UmaAuthorizationServer gluu.example.com
UmaResourceName "Protected Part of My Website"
UmaGetScope "https://example.com/uma/read"
UmaPutScope "https://example.com/uma/write"
UmaPostScope "https://example.com/uma/create"
UmaDeleteScope "https://example.com/uma/delete"
UmaSentUserClaims "issuingIDP;givenName;mail;uid"


In the hypothetical example above, the Apache server would require different UMA scopes to perform different HTTP methods in the /protected folder. User claims (or attributes) gathered in the Apache server could also be sent. In the example above, perhaps a SAML authentication happened. The attributes returned by the SAML IDP can be sent to the UMA Authorization Server.

Leverage OAuth2 directly in your application

Libraries exist for SAML, OpenID Connect and UMA in many languages, for example Java and Python. If you want to use UMA or OpenID Connect, and no library exists, your application could use the Gluu OXD server as a mediator. Your application would use local sockets (non-encrypted) to communicate with OXD using a simple JSON protocol.

In general, calling the APIs directly will enable your developers to make your application “smarter.” For example, you could implement transaction-level security more easily. This can have a positive impact on usability. Giving developers more ability to leverage centralized policies may also increase re-use of policies, and ultimately result in better security for a number of reasons.
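As a flavor of what calling the APIs directly can look like, here is a minimal Express sketch that validates incoming bearer tokens against an OAuth2 token introspection endpoint. The URL, client credentials and scope handling below are placeholders, not the Gluu Server's actual configuration, and it assumes the global fetch available in modern Node.js:

var express = require('express');
var app = express();

// Placeholder introspection endpoint exposed by the authorization server.
var INTROSPECTION_URL = 'https://gluu.example.com/introspection';

async function requireToken(req, res, next) {
  var auth = req.headers.authorization || '';
  var token = auth.indexOf('Bearer ') === 0 ? auth.slice(7) : null;
  if (!token) return res.status(401).end();

  var response = await fetch(INTROSPECTION_URL, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Authorization': 'Basic ' + Buffer.from('client-id:client-secret').toString('base64')
    },
    body: 'token=' + encodeURIComponent(token)
  });
  var info = await response.json();

  if (!info.active) return res.status(401).end();
  req.tokenScopes = (info.scope || '').split(' '); // e.g. enforce per-route scopes here
  next();
}

app.get('/api/orders', requireToken, function (req, res) {
  res.json({ ok: true });
});

app.listen(3000);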

One more consideration: many developers find it annoying to test the code if a security system relies on a Web Server plugin. It hasn’t stopped the wide adoption of Web plugins, but of course we want to make those developers happy!

Gluu Server, best of both worlds

Both approaches will work; the question is which is a better solution for your requirements. Schedule a meeting with us if you would like to discuss your project, and we can help steer you in the right direction. We can also provide referrals to certified Gluu partners for design and integration services as needed.

Mike Jones - MicrosoftPerspectives on the OpenID Connect Certification Launch [Technorati links]

May 01, 2015 05:45 AM

Many of you were involved in the launch of the OpenID Foundation’s certification program for OpenID Connect Implementations. I believe that OpenID Certification is an important milestone on the road to widely-available interoperable digital identity. It increases the likelihood that OpenID Connect implementations by different parties will “just work” together.

A fair question is “why do we need certification when we already have interop testing?”. Indeed, as many of you know, I was highly involved in organizing five rounds of interop testing for OpenID Connect implementations while the specs were being developed. By all measures, these interop tests were highly effective, with participation by 20 different implementations, 195 members of the interop testing list, and over 1000 messages exchanged among interop participants. Importantly, things learned during interop testing were fed back into the specs, making them simpler, easier to understand, and better aligned with what developers actually need for their use cases. After improving the specs based on the interop, we’d iterate and hold another interop round. Why not stop there?

As I see it, certification adds to the value already provided by interop testing by establishing a set of minimum criteria that certified implementations have been demonstrated to meet. In an interop test, by design, you can test the parts of the specs that you want and ignore the rest. Certification, by contrast, raises the bar by defining a set of conformance profiles that certified implementations have been demonstrated to meet. That provides value to implementers by giving them assurance that if their code sticks to using features covered by the conformance tests and uses certified implementations, their implementations will seamlessly work together.

The OpenID Foundation opted for self-certification, in which the party seeking certification does the testing, rather than third-party certification, in which a third party is paid to test the submitter’s implementation. Self-certification is simpler, quicker, and less expensive than third-party certification. Yet the results are nonetheless trustworthy, both because the testing logs are made available for public scrutiny as part of the certification application, and because the organization puts its reputation on the line by making a public declaration that its implementation conforms to the profile being certified to.

A successful certification program doesn’t just happen. At least a man-year of work went into creating the conformance profiles, designing and implementing the conformance testing software, testing and refining the tests, testing implementations and fixing bugs found, creating the legal framework enabling self-certification, and putting it all in place. The OpenID Connect Working Group conceived of a vision for a simple but comprehensive self-certification program, created six detailed conformance profiles based on the requirements in the specs, and quickly addressed issues as participants had questions and identified problems during early conformance testing. Roland Hedberg did heroes’ work creating the conformance testing software and responding quickly as issues were found. Don Thibeau shared the vision for “keeping simple things simple” and extended that mantra we employed when designing OpenID Connect to the legal and procedural frameworks enabling self-certification. And many thanks to the engineers from Google, ForgeRock, Ping Identity, NRI, PayPal, and Microsoft who rolled up their sleeves and tested both their code and the tests, improving both along the way. You’ve all made a lasting contribution to digital identity!

I think the comment I most appreciated about the certification program was made by Eve Maler, herself a veteran of valuable certification programs past, who said “You made it as simple as possible so every interaction added value”. High praise!

Here are some additional perspectives on the OpenID Certification launch:

Matthew Gertner - AllPeersHow to Go To University The Smart Way [Technorati links]

May 01, 2015 02:01 AM

As good as this college looks, be sure to learn how to go to university the smart way first ... photo by CC user Ericci8996 on wikimedia

In the past generation, many students have gone to college as if they were on autopilot, naively assuming that they could redeem their diploma at the end for a well-paying career that would also serve as a never-ending fountain of self-fulfillment.

These days, recent grads are struggling to find their place in the working world due to their myopic approach, but you can learn from their misfortune by deciding to learn how to go to university the smart way. Here are several ways that you can go into post-secondary education as intelligently as possible…

 

photo by CC user frontierofficial on flickr

Take a year off after high school

Most students rush straight from their suburban bubble into college after high school without having a sense of the wider world around them. How are you supposed to know what you like without having gotten out into that wider world for a year?

Whether you choose to circumnavigate the world in pursuit of different cultures, volunteer to help right various social injustices, or simply enter the workforce to save extra money and see what the 9-5 lifestyle is like, you’ll have a better idea of what you want to get out of your life after your gap year has concluded.

photo by CC user SEVENHEADS on pixabay

Evaluate your interests – what are you passionate about?

Going after a degree solely for the money is a huge mistake that ends in disaster for many students. If you enjoy sports, don’t feel like you have to take an IT degree because you feel it’s the only path to riches.

Any graduate with any degree will succeed based on their motivation to pursue a specific career. If you detest what you are studying, your chances of financial prosperity after graduation will be lower than for those who eat and breathe what they are learning.

photo by CC user David L Roush on wikimedia

Apply for every scholarship and bursary for which you qualify

Now that you have selected your target faculty and school, you’ll need every bit of financial assistance you can get your hands on. These days, all but the wealthiest kids are having a hard time with the exploding cost of education. An hour per day of filling out applications now means much less debt in the future, debt that can otherwise tie your hands with regard to your options after college.

photo by CC user David Maiolo on wikimedia

Consider community college for cheaper tuition for your first two years of post secondary

Forget about the negative connotations that community colleges have had in the past, as the expensive nature of top-line education has made it pragmatic to entertain the possibility of taking entry-level courses there for your first couple of years of post-secondary education.

If these credits are honored by your preferred four-year college, go ahead and save a few bucks by living at home and commuting to a community college in your backyard … your future finances will thank you for it!

photo by CC user Brakelightson on wikimedia

Before deciding on a major, job shadow a professional in fields in which you are interested

The last crucial decision you’ll have to make in college relates to what you’ll major in (i.e. your specialty within your degree, such as biology if you are studying science). At this point, it is more useful to actually follow around a professional in your target careers before making a decision, as stat sheets relaying average salaries mean nothing without the context of knowing what the line of work is actually like from day to day.

The post How to Go To University The Smart Way appeared first on All Peers.

Matthew Gertner - AllPeersThe Best Apps for Making Money [Technorati links]

May 01, 2015 01:58 AM

The Best Apps for Making Money will help you stack the Benjamins like this in no time flat ... photo by CC user 68751915@N05 on Flickr

Apps: they entertain us, guide us to our destinations (unless you still use Apple Maps…), and they clue us in on that hot new restaurant around the corner. However, can they help us solve one of the average person’s biggest problems, which is how to save any percentage of their paltry paycheck?

They certainly can, as the four best apps for making money will aid you significantly in the achievement of this difficult goal…

photo via http://www.droid-life.com/

1) Acorns

Ever whine and complain about how hard it is to save money? For most, it isn’t because they lack the income to set aside a few dimes for the future; it’s because they lack a system that makes that task easy and automatic.

Acorns ticks both of these boxes, as it rounds up each purchase you make with your linked card to the nearest dollar, investing the pocket change in one of five investment funds, ranging in risk from very conservative to very aggressive.

You can also set up automatic debits that will transfer a pre-determined sum from your checking account to your investments on a daily, weekly or monthly basis … automatically.

photo via http://www.ideatoappster.com/

2) TaskRabbit

Those who have the hustling gene in them will want to sign up for TaskRabbit, as it connects individuals who have random tasks that need completing with ambitious errand runners who have a passion for cold, hard cash.

From assembling IKEA furniture to walking your energetic dogs, the jobs that you could complete for complete strangers are numerous and never ending … all you need is a git’er done attitude and you’ll be well on your way to spinning up more moolah in a day than you ever thought possible.

photo via http://everything-pr.com/

3) Mobee

Ever hear about one of those nebulous mystery shopping gigs and wonder what it would be like to try it out for a day? Mobee is an app that makes that possibility a reality, as it pays you to stroll the aisles of your favorite local retail outlets and complete a list of five to ten questions based on your experience there. Doing this earns you points that you can redeem for gift cards, products or dollar bills, so download it today and begin earning!

photo via http://www.appszoom.com/

4) Ebates

By now, you have probably heard of cash-back credit cards, which allot a small percentage of your monthly purchases back as a credit to your bank account or your card balance. Ebates works on roughly the same principle, allocating a set amount (dependent on the deal struck with each retailer) back to your linked account when you shop through the Ebates app.

The post The Best Apps for Making Money appeared first on All Peers.

April 30, 2015

Jeff Hodges - PayPalHTTP cookie processing algorithm in terms of Same Origin Policy and “effective Top Level Domains (eTLDs)” [Technorati links]

April 30, 2015 09:22 PM

This is a community-service posting: The purpose is to unambiguously state the specification of “cookie processing wrt public suffixes”.

Why go through the effort of doing this: it is somewhat difficult to tease this out of the requisite specification(s) and associated documents, e.g., [RFC6265] and the effective Top Level Domain List, and so here it is (corrections/comments welcome).

HTTP cookie processing algorithm in terms of Same Origin Policy and “effective Top Level Domains (eTLDs)” aka “Public Suffixes”
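To give a flavor of the central rule before you dive into the full write-up, here is a simplified sketch of the domain check from RFC 6265, Sections 5.1.3 and 5.3; the tiny hard-coded suffix list is a stand-in for the real public suffix (eTLD) list:

// If the cookie's Domain attribute is itself a public suffix, the cookie is
// ignored unless the request host is exactly that suffix (RFC 6265, s5.3).
var PUBLIC_SUFFIXES = ['com', 'co.uk', 'org']; // stand-in for the full eTLD list

function domainMatches(host, domain) {
  // RFC 6265, s5.1.3: identical, or the host ends with "." + domain.
  return host === domain || host.slice(-(domain.length + 1)) === '.' + domain;
}

function acceptCookieDomain(requestHost, domainAttr) {
  var domain = domainAttr.replace(/^\./, '').toLowerCase();
  var host = requestHost.toLowerCase();

  if (PUBLIC_SUFFIXES.indexOf(domain) !== -1 && host !== domain) {
    return false; // e.g. Domain=co.uk set from www.example.co.uk is rejected
  }
  return domainMatches(host, domain);
}

console.log(acceptCookieDomain('www.example.co.uk', '.example.co.uk')); // true
console.log(acceptCookieDomain('www.example.co.uk', '.co.uk'));         // false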

=JeffH sez: it’s long — read it anyway :)

WAYF News“PHPH” metadata handler launched [Technorati links]

April 30, 2015 09:45 AM

WAYF today has launched its new metadata handler, PHPH, acronymic for PHederation PHeeder.

Metadata I/O system

PHPH basically reads a number of metadata sources and from these generates a set of metadata feeds needed for the operation of WAYF.

Interfederation customs clearance

Through PHPH's web interface, the WAYF Secretariat has the ability to edit interfederation metadata, enforcing WAYF constraints on entities imported into WAYF, as well as eduGAIN and Kalmar2 policies on WAYF entities being exported.

Metadata explorer

Each feed and entity handled by PHPH can be explored through the system's browser interface, publicly available here, both as XML and in a “flat” format. Features include an entity search field, filters, and a graphical overview of the feeds involved and their interrelations. Public access is read-only.

Watch it in action

Watch our brief intro video here!

WAYF NewsAarhus School of Marine and Technical Engineering now a WAYF member [Technorati links]

April 30, 2015 08:50 AM
Aarhus School of Marine and Technical Engineering (‘AAMS’) has just joined WAYF. Consequently, users here now have the ability to access WAYF-connected web services using their AAMS login credentials.

KatasoftHow to Write Middleware for Express.js Apps [Technorati links]

April 30, 2015 04:39 AM

Express.js is a lightweight HTTP framework for node.js that allows you to create a variety of applications, from a standard website to a REST API. It gets out of your way with a minimal API that you fill in with your custom needs.

The structure of ExpressJS is this: everything is “middleware”. If you’ve built an Express app, you’ve probably seen code like this:

app.use(bodyParser())
app.use(cookieParser())

This code wires middleware to your application. So what is Middleware?

What is Middleware?

Middleware is a function that receives the request and response objects of an HTTP request/response cycle. It may modify (transform) these objects before passing them to the next middleware function in the chain. It may decide to write to the response; it may also end the response without continuing the chain.

In other frameworks “middleware” is called “filters”, but the concept is the same: a request, response, and some transformation functions.

A very simple middleware function looks like this:

function logger(req,res,next){
  console.log(new Date(), req.method, req.url);
  next();
}

This is middleware at its simplest: a function with a signature of (req, res, next). In this particular example, a simple logger prints some information about these requests to the server console, and then continues the chain by calling next().

The job of Express is to manage your chain of middleware functions. All middleware should achieve three things:

The “Hello, World!” of Middleware

As you write your own middleware you will run into some pitfalls, but fear not! I will cover them in this article, so you know if you’ve fallen into one. :)

In our example application, we want two simple things to happen: every request should be logged to the server console, and requests to /hello should get a friendly greeting.

Let’s get started! Here is a basic shell of an Express application:

var express = require('express');

var app = express();

var server = app.listen(3000);

This Express application doesn’t do anything by itself; you need to add some middleware! We’ll add our logger (the one you saw in the introduction):

app.use(logger);

Good to go, right? When we run our server and make a request of it (using Curl in the terminal), we should see a log statement printed in the server console. But if you try, you’ll see this from your request:

$ curl http://localhost:3000/
Cannot GET /
$

And this from your server:

Mon Mar 23 2015 11:05:04 GMT-0700 (PDT) 'GET' '/'

We saw the server logging, but Curl gets a ‘Not Found’ error. What happened?

Pitfall #1 – “Not Found” means “Nothing else to do”

While we did the good thing of calling next(), no other middleware has been registered and there is nothing else for Express to do!

Because we have not ended the response, and there are no other middleware functions to run, Express kicks in with a default “404 Not Found” response.

The Solution: end your responses via res.end(). We’ll cover that in a later section.

Saying Hello

We’re going to write a new middleware function, named hello, and add this to our app. First, the function:

function hello(req,res,next){
  res.write('Hello \n');
  next();
}

Then add it to your middleware chain:

app.use(logger);
app.get('/hello',hello);

Notice the difference between use and get? These mean different things in Express.js: use attaches middleware that runs for every request (regardless of HTTP method, and optionally under a path prefix), while get attaches handlers only to GET requests for the given path.

So what happens when we make a request for /hello? Here’s what Curl would do:

$ curl http://localhost:3000/hello
Hello

And what the server logs:

Mon Mar 23 2015 11:23:59 GMT-0700 (PDT) 'GET' '/hello'

Looks okay, right? But if you actually try this, you’ll notice that your Curl command never exits. Why? Read on…

Pitfall #2 – Not Ending The Response

Our hello middleware is doing the right thing and writing “Hello” to the response, but it never ends the response. As such, Express will think that someone else is going to do that, and will sit around waiting.

The solution: You need to end the response by calling res.end().

We wanted to say “Bye” as well, so let’s create some middleware for that. In that middleware we will end the response. Here’s what the bye middleware looks like:

function bye(req,res,next){
  res.write('Bye \n');
  res.end();
}

And now we’ll add it to the chain:

app.use(logger);
app.get('/hello',hello,bye);

Now everything will work the way we want! Here is what Curl gives us:

$ curl http://localhost:3000/hello
Hello
Bye
$

And the server will log our requests as well:

Mon Mar 23 2015 11:23:59 GMT-0700 (PDT) 'GET' '/hello'

Middleware: Mix and Match

When you create simple middleware functions with single concerns, you can build up an application from these smaller pieces.

Here’s a hypothetical situation: let’s say we’ve set up an easter-egg route on our server, named /wasssaaa (rather than hello). Of course, we want to know how many people hit this route, because that’s really quite interesting. But we don’t want to know if someone is hitting the hello route, because that’s not very interesting.

Besides, our marketing team can tell us that information from their analytics system (but they don’t know about our easter egg! Ha!)

We would rewrite our middleware wiring to look like this:

app.get('/hello',hello,bye);

app.get('/wasssaaa',logger,hello,bye);

This removed the global app.use(logger) statement, and added it to just the /wasssaaa route. Brilliant!

Express.js Routers

Express.js has a feature called Routers – mini Express applications that nest within each other. This pattern is a great way of breaking up the major components of your application.

A typical situation is this: you have a single Express server, but it does two things: it serves the HTML pages of your website, and it serves a JSON API (say, under /api).

The code and dependencies for the web pages are drastically different from the API service, and you typically divide up that code. You can also divide them into separate Express routers!

Hello World, with Routers

Following the situation above, let’s build a simple server that says Hello! to everyone on our website AND our API, but only logs requests for the API service. That simple app might look like this:

var express = require('express');

var app = express();

var apiRouter = express.Router();

apiRouter.use(logger);

app.use(hello,bye);

app.use('/api',apiRouter);

var server = app.listen(3000);

We’ve separated our API service by defining it as an Express router, and then attaching that router to the /api URL of our main Express application.

What happens when we run this? Here’s what Curl reports:

$ curl http://localhost:3000/
Hello
Bye

$ curl http://localhost:3000/api/hello
Hello
Bye
$

Looking good, right? But what does the server have to say?

...

Hmm.. it doesn’t say anything! What happened??

Pitfall #3: Ordering of Middleware is Important

Express.js views middleware as a chain, and it just keeps going down the chain until a response is ended or it decides there is nothing left to do. For this reason, the order in which you register the middleware is very important.

In the last example we registered the bye middleware before we attached the apiRouter. Because of this, the bye middleware will be invoked first. Our bye middleware ends the response (without calling next), so our chain ends and the router is never reached!

The Solution: Re-order your middleware. In this situation, we simply need to register the apiRouter first:

app.use('/api',apiRouter);

app.use(hello,bye)

With this, we now get what we expect. Curl reports the same as before:

$ curl http://localhost:3000/
Hello
Bye
$ curl http://localhost:3000/api/hello
Hello
Bye
$

But now our server logs the API request:

Mon Mar 23 2015 14:48:59 GMT-0700 (PDT) 'GET' '/hello'

But notice something curious here: even though we have requested /api/hello, the logger reports /hello. Hmm…

Pitfall #4: URLs Are Relative to a Router

A router doesn’t really know that it’s being attached to another application. If you really need to know the exact URL, you should use req.originalUrl. Or if you’re curious about where this route has been attached, use req.baseUrl (which would be /api in this example).
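For example, a router-aware variation of our logger (my own tweak, not from the original article) could log the full requested path instead:

function logger(req, res, next) {
  // req.originalUrl is what the client actually requested ('/api/hello'),
  // while req.url is relative to wherever this router was mounted ('/hello').
  console.log(new Date(), req.method, req.originalUrl);
  next();
}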

Debugging Middleware

In this article I’ve shown you some of the common pitfalls with Express.js middleware. This certainly will help you as you write your own middleware, but what about all that third party middleware that you put into your Express application?

If you’re using 3rd party middleware and it’s not working as expected, you’re going to have to stop and debug.

First, look at the source code of the module. Scan it for places where it accepts (req,res,next) and look for places where it calls next() or res.end(). This will give you a general idea of what the flow control looks like, and might help you figure out the issue.
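One trick that can help (a quick sketch of my own, not part of any module) is to wrap the suspect middleware so you can see whether it ever ends the response or calls next():

function traceMiddleware(name, middleware) {
  return function (req, res, next) {
    var originalEnd = res.end;
    res.end = function () {
      console.log(name, 'ended the response for', req.originalUrl);
      return originalEnd.apply(res, arguments);
    };
    middleware(req, res, function (err) {
      console.log(name, 'called next()', err ? 'with error: ' + err : '');
      next(err);
    });
  };
}

// Usage: app.use(traceMiddleware('cookieParser', cookieParser()));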

Otherwise, check out Node Inspector and go down the step-by-step debugging path.

So Much Middleware, So Little Time!

I’ll wrap this article with this advice: there’s a lot of middleware out there! Start browsing NPM for great node.js middleware! If you want some inspiration, here are some we love:

And of course (we’re a little biased), but we think Stormpath Express will be a great addition to your Express.js application. It handles all the authentication and user management for your application, so you don’t need to build out user infrastructure from scratch. It takes about 15 minutes to get a full authentication system for Express with this tutorial – including all the login screens.

With that.. happy coding! Feel free to leave comments and questions below. :)


Matthew Gertner - AllPeersSo What Can The Apple Watch Really Do? [Technorati links]

April 30, 2015 03:40 AM

Soon enough, you'll be able to sync your Apple Watch with your iPhone to make some amazing magic happen ... photo by CC user janitors on flickr

There have been a lot of rumors surrounding the newest creation at one of the world’s most influential tech firms. The biggest question of all, however, is simply this: what can the Apple Watch really do? This post will share just a few ways this souped-up wristwatch will change your life.

It can…

photo by CC user HLundgaard on wikimedia

help you pay for your coffee

Over the past few years, Apple has been getting into the swipe-to-pay market that the creation of near field communication has spawned in the marketplace. Intended to compete with Square (their main adversary in this space), Apple Pay is now accepted at retailers like Whole Foods, McDonald’s and Walgreens.

While users currently have to lug out their smartphones and boot up the app in order to complete this function, you will be able to simply swipe your Apple Watch across the pay pad, saving you time while your expensive iPhone stays snugly in your pocket.

photo by CC user 72098626@N00 on Flickr

track your fitness stats

With people becoming acutely aware of the health risks surrounding a sedentary lifestyle, more of them are becoming motivated to finally get their butts off the couch in an effort to get healthy. While this alone is laudable, there are many pitfalls along the way that derail one’s efforts.

By tracking fitness stats, one can see how they have improved over weeks and months, thus providing a positive feedback cycle of motivation that can keep one on the path to optimal fitness. The Apple Watch will be the most compact fitness tracking device to hit the market to date, and seeing how it contains not only GPS connectivity that can suss out how many calories you have burned, but also a heart rate monitor, it has the potential to change the lives of innumerable people.

photo by CC user wonderlane on flickr

turn your lights off when you forget to do so

Our lives are so hectic that we can forget to do the most mundane things. With the internet of things embedding itself in an increasing number of homes, there are apps coming to the Apple Watch that will allow you to remotely control aspects of your household, such as heat and lights.

Has it been an exceptionally cold commute home? Crank the heat so that your house will be toasty warm when you walk through the door. Rushed out with an overdue work project on your mind, and forgot to turn off the lights in your living room? An app can alert you to this fact once you’ve left, allowing you to turn them off with one press of a button on your Apple Watch.

The post So What Can The Apple Watch Really Do? appeared first on All Peers.

Matthew Gertner - AllPeersWhere to go in 2015 to get a bang for your buck! [Technorati links]

April 30, 2015 03:39 AM

With scores of unspoiled beaches, Cambodia tops the list of where to go in 2015 to get a bang for your buck ... photo by CC user happytimeblog on flickr

While you are looking to travel internationally for the first time since the economic crisis hit, you are still in the process of rebuilding your financial reserves. As such, it is important to know where to go in 2015 to get a bang for your buck. Fortunately, due to the dollar’s rising strength and an abundance of amazing places, there are many outstanding choices…

photo by CC user Bjørn Christian Tørrissen on wikimedia

1) Cambodia

With infrastructure improving markedly with each passing year, Cambodia has become increasingly accessible to foreign visitors. This is excellent news, as not only does it boast a Wonder of the World in the ruins at Angkor Wat and perfect beaches on the islands that lie off its coast, but prices here are among the cheapest in a region known for cheap prices. Think 50 cent happy hours for the local beer, $10 hotel rooms and Western meals that rarely break $5 a plate, drinks included.

photo by CC user ADD on pixabay

2) South Africa

While South Africa is the most developed nation on the massive continent upon which it sits, prices throughout the country are markedly cheaper than comparable nations elsewhere. A meal at a mid range restaurant costs $29 in Cape Town, whereas it would cost $75 in New York City.

Similarly, a bottle of wine: $15 in the Big Apple, $5 in South Africa. A survey of hotel prices shows that it is rare to find a three-star property that costs over $100 a night, and with the increase in the value of the American dollar worldwide, this diverse destination is looking more attractive than ever.

photo by CC user 57124063@N03 on Flickr

3) Czech Republic

Europe is usually typecast as an expensive place to travel, but all one has to do to save money when visiting the Old World is to head south and east. If you adhere to this advice, ensure that the Czech Republic figures in your plans, as this Central European nation is incredibly affordable.

This is hard to believe when you flip through photos of Prague on the internet, but there are numerous pubs where you can get pints of many local brands of beer for less than $1.50 USD, as well as abundant hotel rooms for less than $60 USD (including the unique boat hotel that one should try for at least one evening during their stay here).

photo by CC user chensiyuan on wikimedia

4) Guatemala

While this country does have a bit of a bad reputation when it comes to crime, those not willing to allow media stoked fears to rule their lives will be rewarded by a series of rich experiences during their time in Guatemala.

From the extensive Mayan ruins at Tikal to the unreal beauty of Lake Atitlan, the only thing that will take your breath away more will be the low prices, which will have you pinching yourself before paying them. From cute casitas perched along the steep slopes of Guatemala’s most famous volcanic lake, to feasts in gorgeous restaurants housed in colonial buildings in Antigua, you will be paying a fraction of what you would expect back in your first-world home.

The post Where to go in 2015 to get a bang for your buck! appeared first on All Peers.

April 29, 2015

CourionGartner VP Lori Robinson to Share Insight at CONVERGE May 21 [Technorati links]

April 29, 2015 01:18 PM

Access Risk Management Blog | Courion

Lori Robinson, Research Vice President in Identity & Privacy Strategies for Gartner, the leading IT research firm, will present on Thursday May 21st at 5:00 p.m. at CONVERGE, the Courion customer conference. Lori’s practice focuses on the needs of information security professionals – people just like you. Prior to Gartner, Lori was a Senior Analyst with the Burton Group and was previously a Project Manager at Novell.

CONVERGE provides a unique opportunity for you to network with and learn from your peers, Courion executives and partners. If you are planning on joining us, don’t hesitate a moment longer! Register today to take advantage of a $100 discount. The reduced rate disappears on Thursday April 30th, so don’t delay!

blog.courion.com