September 26, 2016

Katasoft: How to Gracefully Store User Files [Technorati links]

September 26, 2016 01:17 PM

When you build a web application, one thing you may need to think about is how you plan to store user files.

If you’re building an application that requires users to upload or download files (images, documents, receipts, etc.) — file storage can be an important part of your application architecture.

Deciding where you’ll store these files, how you’ll access them, and how you’ll secure them is an important part of the engineering process, and can take quite a bit of time to figure out for complex applications.

In this guide, I’m going to walk you through the best ways to store files for your users if you’re already using Stormpath to handle your user storage, authentication, and authorization.

If you aren’t already using Stormpath—are you crazy?! Go sign up and start using it right now! It’s totally free (unless you’re building a large project) and makes building secure web applications, API services, and mobile apps wayyy simpler.

Where Should I Store Files?

When building web applications, you’ve got a few choices for where to store your files. You can:

  1. Store user files in your database in a text column, or something similar
  2. Store user files directly on your web server
  3. Store user files in a file storage service like Amazon S3

Out of the above choices, #3 is your best bet.

Storing files in a database directly is not very performant. Databases are not optimized for storing large blobs of content. Retrieving and storing files from a database server are both incredibly slow and will tax all other database queries.

Storing files locally on your web server is also not normally a good idea. A given web server only has so much disk space, which means you now have to deal with the very real possibility of running out of disk space. Furthermore, ensuring your user files are properly backed up and easily accessible at all times can be a difficult task for even experienced engineers.

Unlike the other two options, storing files in a file storage service like S3 is a great option: it’s cheap, your files are replicated and backed up transparently, and you’re also able to quickly retrieve and store files there without taxing your web servers or database servers. It even provides fine-grained control over who can access what files, which allows you to build complex authorization rules for your files if necessary.

For storing what can sometimes be sensitive information, a file storage service like Amazon S3 is a great way to get the best of all worlds: availability, simplicity, and security.

To sign up for an Amazon Web Services (AWS) account, and to start using Amazon S3, you can visit their website here.

How Do I Store Files in S3?

Now that we’ve talked about where you should store your user files (a service like Amazon S3), let’s talk about how you actually store your files there.

When storing files in S3, there are a few things you need to understand.

Firstly, you need to pick the AWS region in which you want your files to live. An Amazon region is basically a datacenter in a particular part of the world.

Like all big tech companies, Amazon maintains datacenters all over the world so they can build fast services for users in different physical locations. One of the benefits to using an Amazon service is that you can take advantage of this to help build faster web applications.

Let’s say you’re building a website for Korean users. You probably want to put all of your web servers and content in a datacenter somewhere in Korea. This way, when your users visit your site, they only need to connect over a short physical distance to your web server, thereby decreasing latency.

Amazon has a list of the regions in which you can store files in S3 on their website here.

The first thing you need to do is use the list above to pick the most appropriate location for storing your files. If you’re building a web application that needs to be fast from all over the world, don’t worry: just pick the AWS region closest to you. You can always use a CDN service like Amazon CloudFront to optimize this later.

Next, you need to create an S3 bucket. An S3 bucket is basically a directory in which all of your files will be stored. I usually give my S3 buckets the same name as my application.

Let’s say I’m building an application called “The Greatest Test App”—I would probably name my S3 bucket: “the-greatest-test-app”.

S3 allows you to create as many buckets as you want, but each bucket name must be globally unique. That means that if someone else has already created a bucket with the name you want to use: you won’t be able to use it.
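
To make bucket creation concrete, here is a minimal sketch. The function names are mine, the name check only approximates (it does not fully implement) AWS’s bucket-naming rules, and the region string is just an example; `CreateBucketConfiguration.LocationConstraint` is the AWS SDK v2 parameter for choosing a region:

```javascript
// Rough check of S3 bucket-name rules: 3-63 characters, lowercase
// letters, digits, dots, and hyphens, starting and ending with a
// letter or digit. This is an approximation, not the full AWS spec.
function isValidBucketName(name) {
  return /^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$/.test(name);
}

// Parameters for creating a bucket in a chosen region, in the shape
// the AWS SDK v2 createBucket call expects.
function createBucketParams(name, region) {
  return {
    Bucket: name,
    CreateBucketConfiguration: { LocationConstraint: region }
  };
}
```

With these, `createBucketParams('the-greatest-test-app', 'ap-northeast-2')` would give you the parameters for a bucket in Amazon’s Seoul region, and the validator catches names (uppercase letters, underscores) that S3 would reject anyway.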

Finally, after you’ve picked your region and created your bucket, you can now start storing files.

This brings us to the next question: how should you structure your S3 bucket when storing user files?

The best way to do this is to partition your S3 bucket into user-specific sub-folders.

Let’s say you have three users for your web application, and each one has a unique ID. You might then create three sub-folders in your main S3 bucket for each of these users — this way, when you store user files for these users, those files are stored in the appropriately named sub-folders.

Here’s how this might look:

bucket
├── userid1
│   └── avatar.png
├── userid2
│   └── avatar.png
└── userid3
    └── avatar.png

This is a nice structure because you can easily see the separation of files by user, which makes managing these files in a central location simple. If you have multiple processes or applications reading and writing these files, you already know which files are owned by which user.
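
As a rough sketch, key-building helpers like these (the names are illustrative, not from any library) keep that per-user namespacing consistent everywhere you touch S3:

```javascript
// Build a namespaced S3 object key so each user's files live in
// their own sub-folder, e.g. 'userid1/avatar.png'.
function userFileKey(userId, fileName) {
  return userId + '/' + fileName;
}

// Prefix that matches everything a single user owns; this is the
// sort of value you'd hand to S3's Prefix parameter when listing.
function userPrefix(userId) {
  return userId + '/';
}
```

Centralizing the key format in one place means you can later change the layout (say, sharding by the first characters of the user ID) without hunting down string concatenations all over your codebase.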

How Do I “Link” Files to My User Accounts?

Now that you’ve seen how to store files in S3, how do you ‘link’ those files to your actual Stormpath user accounts? The answer is custom data.

Custom Data is essentially a JSON store that Stormpath provides for every resource. This JSON store allows you to store any arbitrary JSON data you want on your user accounts. This is the perfect place to store file metadata to make searching for user files simpler.

Let’s say you have just uploaded two files for a given user into S3, and want to store a ‘link’ to those files in your Stormpath Account for that user. To do this, you will insert the following JSON data into your Stormpath user’s CustomData resource:

{
  "s3": {
    "some-file.txt": {
      "href": "https://s3.amazonaws.com/<bucket>/<userid>/some-file.txt",
      "lastModified": "2016-09-19T17:59:22.364Z"
    },
    "another-file.txt": {
      "href": "https://s3.amazonaws.com/<bucket>/<userid>/another-file.txt",
      "lastModified": "2016-09-19T17:59:22.364Z"
    }
  }
}

This is a nice structure for storing file metadata because every time you have the user account object in your application code, you can easily see which files that user owns and where they live in S3.

This JSON data makes it much easier to build complex web applications, as you can seamlessly find your user files either directly from S3, or from your user account. Either way: finding the files you need is no longer a problem.
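
A small helper along these lines (the function name is mine; the object shape matches the JSON above) can build that metadata entry before you save the Custom Data resource:

```javascript
// Record a file's S3 location in a customData-style JSON store,
// under the 's3' key used in the example above. `bucket` and
// `userId` are whatever you used when uploading the file.
function recordFile(customData, bucket, userId, fileName) {
  customData.s3 = customData.s3 || {};
  customData.s3[fileName] = {
    href: 'https://s3.amazonaws.com/' + bucket + '/' + userId + '/' + fileName,
    lastModified: new Date().toISOString()
  };
  return customData;
}
```

With the real Stormpath SDK you would typically follow this up by saving the account’s Custom Data resource so the link persists.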

How Do I Secure My Files?

So far we’ve seen how you can store files, link them to your user accounts, and manage them.

But now let’s talk about how you can secure your user files.

Security is a large issue for sensitive applications. Storing medical records or personal information can be a huge risk. Ensuring you take the proper precautions when working with this type of data will save you a lot of trouble down the road.

There are several things you need to know about securely storing files in Amazon S3.

First: let’s talk about file encryption.

S3 provides two different ways to encrypt your user files: server-side encryption and client-side encryption.

If you’re building a simple web app that stores personal information of some sort, you’ll want to use client-side encryption. This is the most “secure” form of file storage, as it requires you (the developer) to encrypt the files on your web server BEFORE storing them in S3. This means that no matter what happens, Amazon (as a company) cannot possibly decrypt and view your stored files.

On the other hand, if you’re building an application that doesn’t require the utmost (and usually more complicated) client-side encryption functionality, you can instead use S3’s server-side encryption. This approach allows Amazon to theoretically decrypt and read your files, but still provides a decent amount of protection against many forms of attack.
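
If you go the server-side route, a sketch of the upload parameters might look like this. `'AES256'` is the real SSE-S3 value in the AWS SDK v2 `putObject` parameter shape; the wrapper function itself is just mine for illustration:

```javascript
// putObject parameters for a server-side-encrypted upload. Setting
// ServerSideEncryption to 'AES256' asks Amazon to encrypt the object
// at rest with keys Amazon manages (SSE-S3). For client-side
// encryption you would instead encrypt `body` yourself before upload.
function encryptedUploadParams(bucket, key, body) {
  return {
    Bucket: bucket,
    Key: key,
    Body: body,
    ServerSideEncryption: 'AES256'
  };
}
```

You would pass the returned object straight to the SDK’s `putObject` call; the point is that server-side encryption is a single extra parameter, which is why it is the low-effort default for non-sensitive applications.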

The next thing you need to know about is file permissions, also known as ACLs. The full ACL documentation can be found here.

The gist of it is, however, that when you upload files to S3, you can tell Amazon to give your files certain permissions.

Using Amazon ACLs you can, for example, keep a file private to your own account, grant read access to another specific AWS account, or make a file publicly readable. This gives you very fine-grained control over who has access to what files, and for how long: it is an ideal system for building secure applications.

A general rule of thumb is to grant file permissions only when absolutely necessary. Unless you’re building a public image hosting service, or storing files that are always meant to be publicly accessible (like user avatars), you’ll probably want to lock your files down to the maximum extent possible.
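
A tiny default-deny helper captures that rule of thumb. The naming is mine, but `'private'` and `'public-read'` are real S3 canned ACL values:

```javascript
// Default-deny ACL choice: everything is 'private' unless a file is
// explicitly meant to be public (like a user avatar).
function aclForFile(isPubliclyViewable) {
  return isPubliclyViewable ? 'public-read' : 'private';
}
```

Routing every upload’s ACL through one function like this makes the permissive case the exception you have to opt into, rather than something that can slip in by accident.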

Putting It All Together

Now that we’ve covered all the main things you need to know to securely store user files for your user accounts with S3, let’s do a quick review of what we’ve learned.

Store All User Files in a Sub-Folder of Your S3 Bucket

When storing user files, keep them namespaced by user IDs in your S3 bucket. This way, you can easily distinguish between user files when looking at them from your storage service alone.

Store File Metadata in Your User Account’s Custom Data Store

Use Stormpath’s Custom Data store to store all user file metadata. This way you have a single, simple place to reference all of your file data from your user account alone.

If you’re not using Stormpath to store your user accounts: you’ll want to build something similar.

Encrypt Files on S3

If you’re building a sensitive application, use client-side encryption to encrypt your files before storing them in S3. This way, no one, not even Amazon, can read them.

If you’re not building a sensitive application, use Amazon’s server-side encryption to help alleviate various security concerns. It’s not as secure as client-side encryption, but is better than nothing.

Set Restrictive ACLs for Your Files

Finally, be sure to only grant the minimal necessary permissions you need for each file you store. This way, files are not left open or accessible to people who shouldn’t see them.

And… That’s it! If you follow these rules for storing user files, you’ll do just fine.

Got questions? Drop me a line or tweet @ me!

The post How to Gracefully Store User Files appeared first on Stormpath User Identity API.

September 24, 2016

Matthew Gertner - AllPeers: Speed Off with Better Internet Service [Technorati links]

September 24, 2016 06:18 PM
Those seeking better Internet service often look for breakneck download speeds
Photo by CC user criminalintent on Flickr

Given the amount of time millions of Americans spend daily on the Internet, it should probably not come as a major surprise that many of them want (and demand for that matter) the best high-speed options out there.

Whether one uses the Internet for pleasure, business, or perhaps both, being stuck with slow Internet speed is akin to watching a movie with tons of commercials. Face it: most people are not going to like that idea.

That said, Internet users in search of high-speed Internet should shop around for the best deals out there, looking for a provider able to offer fast Internet service at a reasonable price.

So, are you ready to speed off with better Internet service?

Get Connected with the Best Deal Out There

To get top-rate Internet service at a price that is reasonable, here are a couple of ways to go about it.

Is Bundling the Right Call?

Once you have settled on an Internet service provider, make sure you do all you can to lock-in as much savings as possible.

One way to go about this is by bundling your needs.

For example, various Internet service providers will offer bundled packages (Internet, television, phone etc.) for a set monthly rate.

If you think that bundled packages are just some gimmick that most or all providers throw at consumers, think again.

As an example, say you pay $180 a month for your three main modes of information and entertainment (Internet, television, phone). Internet service is $50 a month; television is $90 a month, while phone service comes in at $40.

Now, what if you could pay a monthly fee of $140 for all three when they are bundled? Over a year’s time, you would save some $480.

Another important aspect in determining just what your needs are is how much usage you get out of the Internet, your television, not to mention your phone.

If you are not watching much television, you could decide to cut out that expense and switch to the Internet to view live streaming and/or watch videos when all is said and done.

Lastly, any service provider you decide to go with must provide stellar customer service.

Stop for a moment and think about how you could be more than a little upset if your Internet, television, even your phone service is down for a prolonged period of time. The last thing you want to have happen is calling your Internet provider (once you have a phone to do that), only to be told they will get someone out there to look at the problem in a couple of days.

When you’re spending good money for what you believe are good services, you want to make sure you receive your money’s worth.

That simply translates into getting customer service that is second to none.

If you find your Internet service provider is not dialed-in to delivering such sound customer service, start to look around at some other potential options out there. Yes, it might seem like a hassle, but you should get what you pay for.

When it comes to finding the best Internet service on the market, put some speed into your efforts.

The post Speed Off with Better Internet Service appeared first on All Peers.

September 23, 2016

Matthew Gertner - AllPeers: How to Cut Office Overheads and Still Have a Prestigious Office [Technorati links]

September 23, 2016 05:33 AM

Ever imagined your office operating out of a plush Macquarie Place address, handling meetings like a pro and managing your team of a hundred with ease and aplomb? While your business might not quite be at the stage of multi-level hires and exciting expansion, it doesn’t mean that you have to give up the dream of a glorious Sydney business address.

The costs of running a business are nothing to be sneezed at, and if you ask any business owner they will tell you that their two biggest costs are staffing and their office space. Staff is something that you kinda need to have – but what if I told you that you could manage your business office at a fraction of the cost of a fixed commercial lease, AND that you could have the prestigious address that you always wanted? You might tell me I was crazy – but I’m not.

I know the secret that can help, and it’s simple: a virtual office. A Virtual Office in Sydney includes a CBD address – and when you consider the decreased costs of having a virtual office compared to renting a commercial property, it makes plenty of sense.


I want to go over the benefits of having a virtual office, and then explore some of the other things that you can do to cut overheads with your business.

A virtual office

The decision to invest in a virtual office is a great boon for your business – let’s take a look why. You can get all of the benefits of a full-service office without having to commit to a lease. You can even get a month-to-month rental of a virtual office, which is perfect if you need to be in town for a particular event or engagement and need the perks of an office without the hassle of having to manage all of the overheads. Why not save the money you would otherwise spend on an office and put it towards something that will grow the value of your company instead? Make an investment into your future.

Limit your overheads and expenses 

Figuring out how to best manage your client relationship is important, and is something that shouldn’t be skimped on. That said, the value that can be gained from a face to face meeting isn’t necessarily the most important thing, and oftentimes you can get the same benefit from having a Skype meeting. Think about what is the most important thing for your business and act accordingly. Figure out what kind of entertainment policy works for your business and stick to it.

Harness the art of telecommunication

We are very lucky to live in this digital world, and it makes sense to harness the power of it for the best benefits for your business. You don’t need admin people, you don’t need a full-time receptionist on your staff, and you can reduce the costs of office space simply by linking your team with the internet. Easy!

Save on office expenses

A big expense for your business is often the paper and other office supplies that you find yourself using. When you switch to a remote staff and a virtual office you will notice a severe decline in the costs of running an office. There is a huge amount of paper wasted every day in the office, and if you can figure out how to cut this cost for your business then you’ll be on the right track to success.

Have you figured out any other ways to save on your office overheads and keep your prestigious address? If you know of any, feel free to let us know!

The post How to Cut Office Overheads and Still Have a Prestigious Office appeared first on All Peers.

September 22, 2016

Matthew Gertner - AllPeers: Improve Your Lawn & Soil with Top-Dressing [Technorati links]

September 22, 2016 06:05 PM
Improve your lawn with top-dressing
Photo by CC user evolutionx on Pixabay

Ideally you wouldn’t have to do this in the first place: laying quality sand and soil before putting your lawn down will hopefully avoid this process altogether. But if you want to improve an existing lawn and its soil, then continue reading.

A healthy lawn requires healthy soil, but that is often difficult to achieve with an already established lawn in place. This is where top-dressing comes in.

Top-dressing is a method that gradually improves soil over time. As the top-dressing breaks down, it filters through the existing soil, improving its texture and general health.

Ideal Time to Consider Top-Dressing

Autumn is the ideal time to consider top-dressing your lawn. This gives your grass time to grow through three to four mowings before the peaks of summer and winter hit. Top-dressing can all be done at once or in stages; this depends entirely on you. Some like to plug away at it and get small amounts of soil delivered at a time, while others prefer to order a big truckload and do it all at once. Either way, the choice is yours. Here you will find more on the top-dressing process.

How Often Should You Top-Dress?

This really depends on your lawn and the location of your home. Troublesome areas may require more attention and repeated application; however, you still don’t need to do it every year. The reason is that each time you top-dress you are adding more soil, which over time raises your grade and can affect thatch breakdown, and therefore overall soil ecology. It is essential not to go overboard. A good approach is to plan ahead: more frequent but lighter applications for troublesome yards will go a lot further than one deep application. For overall organic soil amendment, a very light application of top-dressing brushed into aeration holes can improve the soil without raising the grade.

The post Improve Your Lawn & Soil with Top-Dressing appeared first on All Peers.

Katasoft: Apache Shiro Stormpath Integration 0.7.1 Released [Technorati links]

September 22, 2016 01:11 PM

Welcome to the new Apache Shiro Stormpath integration! This new release features a servlet plugin, plus deeper support for Spring and Spring Boot. Until now, we have only had a basic Apache Shiro realm for Stormpath. While sufficient, this basic realm never granted access to the full suite of Stormpath services. Today, that changed!

Shiro + Stormpath

Servlet Plugin

You can still use the Stormpath realm the same way you are using it today, but if you switch to the new servlet plugin you also get all of the great features you have come to expect from Stormpath, along with the benefit of having the Shiro realm created and configured for you automatically. Just drop in the dependency:

<dependency>
    <groupId>com.stormpath.shiro</groupId>
    <artifactId>stormpath-shiro-servlet-plugin</artifactId>
    <version>0.7.1</version>
</dependency>

When migrating to the servlet plugin there are a few things to keep in mind:

  • You can remove the Shiro configuration in your web.xml
  • You have the option of making Shiro stateless
  • Logouts are now a POST request
  • I’ve taken one of the original Stormpath + Apache Shiro examples and updated it to use the stormpath-shiro-servlet-plugin as a migration guide.

Stormpath Loves Spring

I have created Spring Boot starters for web and non-web applications, as well as examples to help get you started.

All you need to do is drop in the correct dependency:

<dependency>
    <groupId>com.stormpath.shiro</groupId>
    <artifactId>stormpath-shiro-spring-boot-web-starter</artifactId>
    <version>0.7.1</version>
</dependency>

These work in conjunction with the existing Stormpath Spring modules; if you are already familiar with them, you will have no problem getting started.

What Else?

As if the servlet plugin and Spring Boot starters weren’t exciting enough, this Shiro release includes a TON of other new features, like:

  • Single sign-on: Support for Stormpath’s SSO service, ID Site, can be enabled with a single property
  • Built-in login page: One less thing to worry about
  • Social login: Login and registration support for popular social providers like Google, Facebook, LinkedIn, and GitHub
  • User registration and forgot-password workflows: Out-of-the-box user management
  • Drop-in servlet plugin: Just add the dependency, and forget about messing with your web.xml
  • Spring Boot starters: Both web and non-web applications work in conjunction with the Stormpath Spring Boot starters
  • Token authentication: Stateless and signed JWTs
  • New simple examples to help you get started integrating with your servlet-based, Spring Boot, or standalone application
  • Better documentation

Giving Back to Apache Shiro

Stormpath is committed to improving Apache Shiro; that is the big reason why I joined Stormpath in the first place. Over the next few weeks I will be delivering on a few of our more exciting promises, including Servlet 3.x support, improved Spring and Spring Boot support, and Guice 4.x support.

Learn More

To learn more about Apache Shiro, subscribe to the mailing lists, or check out the documentation. Ready to give Shiro or Stormpath a try? These awesome tutorials will get you started:

  • Hazelcast Support in Apache Shiro
  • Tutorial: Apache Shiro EventBus
  • A Simple WebApp with Spring Boot, Spring Security, & Stormpath — in 15 Minutes
  • Secure Connected Microservices in Spring Boot with OAuth and JWTs
  • Secure Your Spring Boot WebApp with Apache and SSL in 20 Minutes

The post Apache Shiro Stormpath Integration 0.7.1 Released appeared first on Stormpath User Identity API.

Kantara Initiative: Real Consent Workshops: The Consent Tech Bubble Grows [Technorati links]

September 22, 2016 01:02 AM

By Mark Lizar and Colin Wallis

It’s been humbling to see the growth, interest and awareness in consent tech over the last eight months, and it is exciting to have Kantara right in the middle of it all.

Over a year ago, the Kantara Initiative Consent & Information Sharing Work Group proposed a collaboration with the Digital Catapult Personal Data & Trust Network (http://pdtn.org) Consent Work Group and started a one-year plan to create awareness in consent tech.

To achieve this collaboration, a series of five ‘Real Consent Workshops’ was facilitated. Curated by Kantara and Digital Catapult experts, the workshops delved into what would make consent and trust scale with people and personal data. Pretty exciting, groundbreaking stuff!

The Real Consent Workshops looked at the gap between the consent people find meaningful and what we have online today, with experts in various fields presenting on a range of topics (for background and blog posts see http://real-consent.org).

Since the first event there has been an incredible surge of interest, as new consent laws in the EU were announced and rules for the transfer of personal information were created with Privacy Shield. Both of these actions have served as a catalyst for the consent tech market. As a result, we have seen the conversation about real consent evolve into a call for summer consent tech projects. This is great news for Kantara.

Now the Personal Data and Trust Network (Consent Work Group) is holding a PDTN-exclusive event on Sept 26th to explore and discuss all of the great new consent tech and the new laws and regulations. We are going, and we hope to see you there.

This growing awareness and activity around consent tech is particularly gratifying. Kantara has long been associated with and active in consent tech, and with our series of Real Consent Workshops we are once again taking a leading position in the industry.

We are curious about your thoughts and hopes around consent tech. Would you like Kantara to hold another series of Consent Tech Workshops? Please drop us a line with a comment or two. Let’s keep the dialogue going.

Click the link below to register for the PDTN event,
Consent Tech: Creating Sustainable Real Consent
https://www.digitalcatapultcentre.org.uk/event/creating-sustainable-real-consent/

Mark Lizar, Consultant and Integration Technical Producer for Smart Species, LTD London, mark@smartspecies.com

Colin Wallis, Executive Director, Kantara Initiative,
colin@kantarainitiative.org

Matthew Gertner - AllPeers: Embrace Your Curves This Summer With A Plus Size Swimsuit [Technorati links]

September 22, 2016 12:59 AM

For the past decade, plus-size bikinis have been taking the world’s beaches by storm! Instead of hiding their beauty from the world, women of all shapes and sizes are sporting sexy fashions, ranging from the high-waisted bikini bottom to the trendy tankini. This movement towards body positivity is only growing as top models such as Ashley Graham and Robyn Lawley have been trendsetting plus-size styles in the fashion industry for years, so much so that the market has stopped catering so exclusively to smaller body types, and has begun to imitate them! However, this year’s new craze is the most fun of all: finding hot deals on these hot bikinis, in every imaginable size and style, online!

A Plus Size Swimsuit is much more stylish these days

Trotting from window to window at the mall can be entertaining, but the plethora of undersized garments will leave the average shopper reeling in anger. Malls and boutique stores have all but ignored the burgeoning plus-size market. This idiocy isn’t all bad though, as it swiftly created a boon of resistance in the fashion industry and continues to shoot the perpetrators in the foot by carving out a niche for savvy shoppers, encouraging them to turn to internet distributors for the best bargains on the world’s widest varieties of plus-size swimsuits. For example, retailers such as swimsuitsforall not only offer fashionable plus-size swimwear, but swimwear that is transferable from the gym to the beach, and accessories such as cover-ups for that flirtatious piece to change into once you’re finished with the water.

Gone are the days of searching hopelessly during the off-season for any bikini that will fit your top. Gone are the days of resigning yourself to buying the one, single, hideous color in the back of some overpriced hole-in-the-wall because it’s all you could find. With the mass of specialized bikini boutiques online, never again will you as a consumer fret over finding your perfect size. It is now as simple as entering the information into the online order form. Now you have more joyous hours to spend scouring for that perfect color and pattern combination! Because swimsuitsforall has the best plus-size swimwear, and because they cater exclusively to plus-size shoppers, you can find every pattern and style you’re looking for in one place.

It is clear that what these online bikini boutiques truly provide is freedom. For years, plus-size women have needlessly been forced into searching far and wide for weeks, just to settle on a bikini that was neither comfortable nor attractive, nor what they were looking for. As if the convenience of shopping from home and enjoying unlimited options isn’t enough, the vast majority of these plus-size-specialist websites ensure their products are perfect for the individual consumer by offering the option of sending a suit back for a full refund. Such lofty degrees of competition have further driven up the quality of the average bikini. For decades, an inexpensive bikini of any size was guaranteed to rip or wear within a few trips to the pool. If a consumer wanted a swimsuit that would provide years of enjoyment, she was expected to shell out a couple of hundred dollars, and even then, it was often an “all sales are final” situation. Boutiques like swimsuitsforall.com combat this by introducing an Xtra Life Lycra collection for serious swimmers that is highly resistant to breaking or tearing.

Indeed, thanks to the advent of internet-based, plus-size bikini boutiques, a meticulous shopper can guarantee herself a magnificent bargain on the perfect plus-size garment, in any style, any day of the year!

The post Embrace Your Curves This Summer With A Plus Size Swimsuit appeared first on All Peers.

    September 21, 2016

    KatasoftSecurely Storing Files with Node, S3, and Stormpath [Technorati links]

    September 21, 2016 05:28 PM

    There are a lot of redundant problems you need to solve as a web developer. Dealing with users is a common problem: storing them, authenticating them, and properly securing their data. This particular problem is what we here at Stormpath try to solve in a reusable way so that you don’t have to.

    Another common problem web developers face is file storage. How do you securely store user files? Things like avatar images, PDF document receipts, stuff like that. When you’re building a web application, you have a lot of choices:

    1. Store user files in your database in a text column, or something similar
    2. Store user files directly on your web server
    3. Store user files in a file storage service like Amazon S3

    Out of the above choices, I always encourage people to go with #3.

    Storing files in a database directly is not very performant. Databases are not optimized for storing large blobs of content. Both retrieving and storing files from a database server is incredibly slow and will tax all other database queries.

    Storing files locally on your web server is also not normally a good idea. A given web server only has so much disk space, which means you now have to deal with the very real possibility of running out of disk space. Furthermore, ensuring your user files are properly backed up and easily, consistently accessible can be a difficult task, even for experienced engineers.

    Unlike the other two options, storing files in a file storage service like S3 is a great option: it’s cheap, your files are replicated and backed up transparently, and you’re also able to quickly retrieve and store files there without taxing your web servers or database servers. It even provides a fine-grained amount of control over who can access what files, which allows you to build complex authorization rules for your files if necessary.

    This is why I’m excited to announce a new project I’ve been working on here at Stormpath that I hope you’ll find useful: express-stormpath-s3.

    This is a new Express.js middleware library you can easily use with your existing express-stormpath web applications. It natively supports storing user files in Amazon S3, and provides several convenience methods for directly working with files in an abstract way.

    Instead of rambling on about it, let’s take a look at a simple web application:

    'use strict';
    
    const express = require('express');
    const stormpath = require('express-stormpath');
    const stormpathS3 = require('express-stormpath-s3');
    
    let app = express();
    
    // Middleware here
    app.use(stormpath.init(app, {
      client: {
        apiKey: {
          id: 'xxx',
          secret: 'xxx'
        }
      },
      application: {
        href: 'xxx'
      }
    }));
    app.use(stormpath.getUser);
    app.use(stormpathS3({
      awsAccessKeyId: 'xxx',
      awsSecretAccessKey: 'xxx',
      awsBucket: 'xxx',
    }));
    
    // Routes here
    
    app.listen(process.env.PORT || 3000);

    This is a bare-bones web application that uses Express.js, express-stormpath, and express-stormpath-s3 to provide file storage support using Amazon S3 transparently.

This example initialization code requires several values, which are all hard-coded above. To run this minimal application, you need to:

1. Create a Stormpath application and API key (the client and application settings above)
2. Create an Amazon S3 bucket and an AWS access key pair with permission to use it (the aws* settings above)

Assuming you’ve got both of the above, you can immediately start using this library to do some cool stuff.
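If you’d rather not hard-code these secrets, one option is to read them from environment variables. A minimal sketch (the environment variable names here are my own convention, not required by the library):

```javascript
// Sketch: build the express-stormpath-s3 options from environment variables
// instead of hard-coding secrets. The variable names are illustrative.
function s3ConfigFromEnv(env) {
  return {
    awsAccessKeyId: env.AWS_ACCESS_KEY_ID,
    awsSecretAccessKey: env.AWS_SECRET_ACCESS_KEY,
    awsBucket: env.AWS_BUCKET,
  };
}

// Usage: app.use(stormpathS3(s3ConfigFromEnv(process.env)));
```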

    Uploading User Files

    First, let’s take a look at how you can store files for each of your users:

    app.get('/', stormpath.loginRequired, (req, res, next) => {
      req.user.uploadFile('./some-file.txt', err => {
        if (err) return next(err);
    
        req.user.getCustomData((err, data) => {
          if (err) return next(err);
    
res.send('file uploaded as ' + data.s3['some-file.txt'].href);
        });
      });
    });

    This library automatically adds a new method to all of your user Account objects: uploadFile. This method allows you to upload a file from disk to Amazon S3. By default, all files uploaded will be private so that they are not publicly accessible to anyone except you (the AWS account holder).

    If you’d like to make your uploaded files publicly available or set them with a different permission scope, you can easily do so by passing an optional acl parameter like so:

    app.get('/upload', stormpath.loginRequired, (req, res, next) => {
      // Note the 'public-read' ACL permission.
      req.user.uploadFile('./some-file.txt', 'public-read', err => {
        if (err) return next(err);
    
        req.user.getCustomData((err, data) => {
          if (err) return next(err);
    
res.send('file uploaded as ' + data.s3['some-file.txt'].href);
        });
      });
    });

    The way this all works is that all user files will be stored in your specified S3 bucket, in a sub-folder based on the user’s ID.

Let’s say you have a Stormpath user whose ID is xxx, and you then upload a file for this user called some-file.txt. Your S3 bucket would now contain a new object with the key /xxx/some-file.txt. All files are namespaced inside a user-specific folder to make parsing these values simple.

    Once the file has been uploaded to S3, the user’s Custom Data store is then updated to contain a JSON object that looks like this:

    {
      "s3": {
        "some-file.txt": {
          "href": "https://s3.amazonaws.com/<bucketname>/<accountid>/some-file.txt",
          "lastModified": "2016-09-19T17:59:22.364Z"
        }
      }
    }

    This way, you can easily see what files your user has uploaded within Stormpath, and link out to files when necessary.
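For example, to turn that custom data shape into a simple file listing you could render in a template (a sketch assuming the structure shown above; listUserFiles is my own helper, not part of the library):

```javascript
// Turn the custom data's s3 object into an array of { name, href } entries.
function listUserFiles(customData) {
  const s3 = (customData && customData.s3) || {};
  return Object.keys(s3).map(name => ({
    name: name,
    href: s3[name].href,
  }));
}
```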

    The express-stormpath-s3 documentation talks more about uploading files here.

    Downloading User Files

    As you saw in the last section, uploading user files to Amazon S3 is a simple process. Likewise — downloading files from S3 to your local disk is also easy. Here’s an example which shows how you can easily download previously uploaded S3 files:

    app.get('/download', stormpath.loginRequired, (req, res, next) => {
      req.user.downloadFile('some-file.txt', '/tmp/some-file.txt', err => {
        if (err) return next(err);
        res.send('file downloaded!');
      });
    });

As you can see in the example above, you only need to specify the filename; no path information is required to download a file. This makes working with files less painful, as you don’t need to traverse directory paths.

You can read more about downloading files in the documentation here.

    Deleting User Files

    To delete a previously uploaded user file, you can use the deleteFile method:

    app.get('/delete', stormpath.loginRequired, (req, res, next) => {
      req.user.deleteFile('some-file.txt', err => {
        if (err) return next(err);
        res.send('file deleted!');
      });
    });

    You can read more about this in the documentation here.

    Syncing Files

    Finally, this library provides a nice way to ensure your S3 bucket is kept in sync with your Stormpath Accounts.

    Let’s say you have a large web application where you have users uploading files from many different services into S3. This might result in edge cases where files that were NOT uploaded via this library are not ‘viewable’ because the file metadata has not been persisted in the Stormpath Account.

    To remedy this issue, you can call the syncFiles method before performing any mission critical tasks:

    app.get('/sync', stormpath.loginRequired, (req, res, next) => {
      req.user.syncFiles(err => {
        if (err) return next(err);
        res.send('files synced!');
      });
    });

    This makes building large scale service oriented applications a lot simpler.
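Conceptually, syncing reconciles the bucket listing with the metadata in custom data. Here’s a simplified sketch of that idea (not the library’s actual implementation; the function and its signature are my own):

```javascript
// Given the S3 keys under a user's folder and the user's existing custom
// data, add a metadata entry for any file missing from custom data.
function syncMetadata(bucket, accountId, bucketKeys, customData) {
  const s3 = Object.assign({}, (customData && customData.s3) || {});
  bucketKeys
    .filter(key => key.indexOf(accountId + '/') === 0)
    .forEach(key => {
      const name = key.slice(accountId.length + 1);
      if (!s3[name]) {
        s3[name] = { href: 'https://s3.amazonaws.com/' + bucket + '/' + key };
      }
    });
  return { s3: s3 };
}
```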

    You can read more about the sync file support here.

    Wrapping Up

Right now this library is available only for Express.js developers. If you find it useful, please leave a comment below and go star it on GitHub! If we get enough usage from it, I’ll happily support it for the other Stormpath web frameworks as well.

    If you have any questions about Stormpath, Express.js, or Amazon S3, also feel free to drop me a line!

    The post Securely Storing Files with Node, S3, and Stormpath appeared first on Stormpath User Identity API.

    Julian BondDon't eat the seed corn. [Technorati links]

    September 21, 2016 07:07 AM
    Don't eat the seed corn.

    We're going to need all the fossil fuel that's left to create a world where we don't need it any more.

    Discuss.

    http://cassandralegacy.blogspot.co.uk/2016/09/the-sowers-way-some-comments.html
     The Sower's Way: some comments »
    Image: sower by Vincent Van Gogh The publication of the paper "The Sower's way: Quantifying the Narrowing Net-Energy Pathways to a Global Energy Transition" by Sgouridis, Csala, and Bardi, has generated some debate on the "Ca...

    [from: Google+ Posts]
    September 20, 2016

    ForgeRockWhat’s Preventing Retailers from Implementing Omnichannel? [Technorati links]

    September 20, 2016 08:38 PM

    Antiquated Identity Infrastructure, Lack of Visibility Across Channels Keeping Retailers from Creating Frictionless Omnichannel Experiences for Shoppers

     

    These are challenging times for retailers. With so many shoppers preferring to make purchases online now, retailers with significant brick-and-mortar holdings struggle to understand their customers and tailor individual experiences across channels. We hear from a lot of retail organisations that are realising their fragmented, legacy identity and access management systems are a real barrier to omnichannel success, because they can’t support digital customer demands and business requirements. At the same time, there is growing awareness among retailers that they need to update their technologies to maintain customer loyalty and sustain growth.

Antiquated identity and access management infrastructure is a real barrier to retailers working to implement frictionless omnichannel customer experiences.

    Analyst research backs up these concerns: An Aberdeen report found that companies with omnichannel customer engagement strategies retain on average 89% of their customers, compared to 33% for companies with weak omnichannel customer engagement. Meanwhile, an Accenture report found that 94% of retailers surveyed noted significant barriers to omnichannel integration. The retailers we work with here at ForgeRock all are seeking to provide customers with more engaging, more convenient customer experiences. “Frictionless” is a word we hear often. But we’re also hearing that these visions for transforming digital experiences are falling short in reality due to numerous challenges.

    For one, the identity and access management technologies retailers have long relied upon to secure transactions are also known to create silos of customer data. With increasing customer privacy concerns and regulations regarding data protection and sharing emerging in Europe and the U.S, not surprisingly there’s a lot of uncertainty. There’s also a creeping consensus that the lack of continuous, intelligent security through the shopping journey is leading to greater risk of identity fraud and malicious attacks. Many retailers report that the inability to seamlessly connect users, devices, and things makes it difficult to onboard new customers, or enable returning customers to quickly access services or merchandise.

    Even advanced retailers with loyalty programs and fully built-out online operations struggle to create a complete view of the customer and their relationship with the brand as they move from in-store to online interactions. The Retail Gazette, the UK’s daily retail news publication, reports that while over 90% of retailers now sell online in the UK, nearly two thirds claim a lack of visibility across channels is the biggest problem they face.

A lack of visibility across channels is what we’re hearing from our retail customers also. Many admit that their loyalty programs are great for capturing basic customer data, but that acting on that data to engage individual customers isn’t yet possible. Retailers see patterns, but can’t make connections — there’s no way to tailor offers or suggest new products that might be of interest. Because retailers lack the ability to proactively engage and personalise, the online experience is static. These problems are often rooted in the fact that many retailers have redundant identity systems and often don’t recognise that the same customer is buying from multiple brands or has multiple roles (for instance, a teacher shopping for classroom supplies one day could be a mom shopping for household cleaners the next). Antiquated identity infrastructure can also present roadblocks on the journey from prospect to active customer. It’s far easier to get new users to sign up, subscribe, or purchase when customer identity and access management processes are swift, agile and friction-free.

When you consider these challenges in the context of the fast-growing Internet of Things, you get a sense of just how daunting this all is for today’s retailers. One of the key concepts of the Connected Home is that connecting appliances, lighting, heating & cooling, etc., will enable homeowners to interact with retailers or service providers to, for instance, automatically have milk and groceries delivered when the fridge is getting empty, or have new lightbulbs sent when the old ones blow out. These kinds of scenarios are still in their early days (Amazon Dash buttons are a good example), and their success will depend very much on retailers solving their more immediate challenges – specifically, overcoming fragmented identity and access management infrastructure. In our next post, we’ll explore some of these solutions: how, when you can quickly connect new digital ecosystems, you can be positioned to maximise your revenue opportunities. And if you can deliver a customer experience that is seamless, personalised and secure, then you’ll be better equipped to grow a digital retail business and build lasting relationships with your users.

    Stuart Hodkinson is Regional Vice President, UK & Ireland at ForgeRock.

    The post What’s Preventing Retailers from Implementing Omnichannel? appeared first on ForgeRock.com.

    Neil Wilson - UnboundIDUnboundID LDAP SDK for Java 3.2.0 [Technorati links]

    September 20, 2016 07:37 PM

    We have just released the 3.2.0 version of the UnboundID LDAP SDK for Java. It is available for download via the LDAP.com website or from GitHub, as well as the Maven Central Repository.

    You can get a full list of changes included in this release from the release notes (or the Commercial Edition release notes for changes specific to the Commercial Edition). Some of the most significant changes include:

    MythicsThank You - 2016 Oracle Linux & Virtualization Partner of the Year - Oracle OpenWorld16 #OOW16 [Technorati links]

    September 20, 2016 06:04 PM

    Mythics was proud to accept the 2016 Oracle Linux and Virtualization North America Partner of the Year…

    Mike Jones - MicrosoftUsing Referred Token Binding ID for Token Binding of Access Tokens [Technorati links]

    September 20, 2016 12:14 PM

    OAuth logoThe OAuth Token Binding specification has been revised to use the Referred Token Binding ID when performing token binding of access tokens. This was enabled by the Implementation Considerations in the Token Binding HTTPS specification being added to make it clear that Token Binding implementations will enable using the Referred Token Binding ID in this manner. Protected Resource Metadata was also defined.

    Thanks to Brian Campbell for clarifications on the differences between token binding of access tokens issued from the authorization endpoint versus those issued from the token endpoint.

    The specification is available at:

    An HTML-formatted version is also available at:

    Matthew Gertner - AllPeersTop 4 Advantages of Having a Business Website [Technorati links]

    September 20, 2016 01:50 AM
Photo by thebluediamondgallery.com and nyphotographic.com

When you start your own business, make sure you also set up a website. This is essential for every business that wants to target a wider market. It also helps you reach people traditional marketing usually can’t, and generates brand awareness. You need to take advantage of the online world to keep your business growing.

    As a business owner of any kind of enterprise, it is crucial that you build your own online presence. If you want your business to thrive, then you must create a website for your company. In fact, a website is a powerful marketing tool that is absolutely beneficial to your business. You can hire a professional web designer to make a website for you or simply build your own by using the basic and free tools in web designing.

A website does not require you to borrow money from lending companies like Kikka (https://www.kikka.com.au/) or other traditional banks. It is actually cheap to set one up. You have to pay annual fees, but even those are cheap compared to other marketing platforms.

    Still not convinced on why you need to put up a website? Here are the advantages of having a website for your own enterprise:

    Having a business website is convenient.

Customers want convenience all the time. If you have your own website, it will be easier for them to shop for your products or avail themselves of your services from the comfort of their homes. Potential customers can simply browse the things you offer online and select which ones to purchase. Thus, it is truly a smart move to create a website and advertise your products and services online.

    Having a business website is cost-effective.

Everyone knows that advertising on the Internet is low-cost. Building your own business website won’t hurt your pocket, so it is better to take advantage of it. By having a strategically developed website, you can reap its benefits later on. Although it takes some time to gain traffic to your website, it is still worth it. Your online presence will matter in the long run, enabling you to advertise your company around the web.

    Having a business website is very accessible.

Any website or social media account you have for your business is easily accessible to people across the globe, around the clock. Potential customers no longer need to visit a physical store to buy something; they can access your website from anywhere, at any time of day.

    Having a business website helps boost your sales.

A website allows you to become visible worldwide, helping you gain more customers through your online presence. That means a greater possibility of generating more sales, which spells success for your business.

It is truly crucial to create your own business website nowadays. To get your venture off the ground, you’ve got to have an effective marketing tool that will make a significant difference. As you have read, the benefits mentioned above show how important a website’s role is to your entire business.

    The post Top 4 Advantages of Having a Business Website appeared first on All Peers.

    September 19, 2016

    KatasoftTutorial: Launch Your ASP.NET Core WebApp on Azure with TLS & Authentication [Technorati links]

    September 19, 2016 07:05 PM

    The use of TLS (HTTPS) to encrypt communication between the browser and the server has become an accepted best practice in the software industry. In the past, it was difficult and expensive to maintain the certificates necessary to enable HTTPS on your web application. No longer! Let’s Encrypt issues free certificates for any website through an automated mechanism.

    In this tutorial, we’ll look at how to use Let’s Encrypt to provide transport-layer security for a web application built with ASP.NET Core and running on Azure App Service. Once the transport layer is taken care of, we’ll add the Stormpath ASP.NET Core integration for secure user storage and authentication.

    To follow this tutorial, you’ll need:

  • Visual Studio 2015 Update 3 or later
  • A Stormpath account (you can register here)
  • An active Azure account
  • A custom domain name

It’s worth noting that the free service tier on Azure doesn’t allow for custom domain SSL (which is what Let’s Encrypt provides), so this solution isn’t completely free. You can sign up for an Azure free trial and get $200 of credit, which covers everything you’ll need to do in this tutorial.

    To make it easy to use Let’s Encrypt with Azure, we’ll use the Let’s Encrypt Azure site extension, which has a detailed install guide. I’ll reference this guide later when we set up the extension.

    Let’s get started!

    Create a new ASP.NET Core application

    In Visual Studio, create a new project from the ASP.NET Core Web Application (.NET Core) template.
    visual-studio-create-project

    Next, choose the Web Application template. Make sure that Authentication is set to No Authentication — you’ll add it later with Stormpath.

    Although we are going to host the application on Azure, you don’t need to check the box to host the application in the cloud. We’ll set up the deployment to Azure when we’re ready to publish the application.
    Create your asp.net core webapp

    Once you click OK, Visual Studio will create the project for you. If you run the application right now, it will look like this:
    Visual Studio workaround

    Add Stormpath for auth

    With Stormpath you get a secure authentication service built right into your application, without the development overhead, security risks, and maintenance costs that come with building it yourself. To install the Stormpath ASP.NET Core plugin, get the Stormpath.AspNetCore package using NuGet.

    Then, update your Startup class to add the Stormpath middleware:

    // Add this import at the top of the file
    using Stormpath.AspNetCore;
    using Stormpath.Configuration.Abstractions;
    
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddStormpath(new StormpathConfiguration()
        {
            Client = new ClientConfiguration()
            {
                ApiKey = new ClientApiKeyConfiguration()
                {
                    Id = "YOUR_API_KEY_ID",
                    Secret = "YOUR_API_KEY_SECRET"
                }
            }
        });
    
        // Add other services
    }
    
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        // Logging and static file middleware (if applicable)
    
        app.UseStormpath();
    
        // MVC or other framework middleware here
    }

    The API key ID and secret strings can be generated by logging into the Stormpath console and clicking Create API Key. The API credentials will be downloaded as a file you can open with a text editor. Copy and paste the ID and secret into your Startup class.

    Note: For production applications, we recommend using environment variables instead of hardcoding the API credentials into your application. See the documentation for how to accomplish this.

    Adding Stormpath to your ASP.NET Core project automatically adds self-service login and registration functionality (at /login and /register). You can use the [Authorize] attribute to require an authenticated user for a particular route or action method. Your user identities are automatically stored in Stormpath, no database setup or configuration required!

    To learn what else you can do with Stormpath in your ASP.NET Core project, see the quickstart in the Stormpath ASP.NET Core documentation.

    Deploy to Azure

    Now that we have a basic application with user security, let’s deploy to Azure App Service. App Service is a managed hosting service that makes it easy to deploy applications without having to set up and maintain virtual machines.

    Navigate to Build > Publish and select Microsoft Azure App Service as your publishing target.

    Deploy to Azure

    If you’ve never published to Azure, you’ll be prompted to log in with your Azure credentials. After you authenticate, you’ll see a list of your current Azure App Service resources (if you have any).

    Since this is a new project, you’ll need to set up a new Resource Group and App Service instance to host it. Click on the New button to create the required resources.
    Create an app service in Azure

    In the first field, type a name for your application. The name you pick will be the temporary Azure URL of your application (in the form of .azurewebsites.net). Enter a name for the Resource Group, and click New to create a new App Service Plan (the defaults are fine).

    Once you have populated all the fields on the dialog, click Create to provision the resources in Azure. When the process is complete, the deployment credentials will be populated for you on the next step of the Publish wizard. Click the Validate Connection button to make sure everything is working.
    Create an ASP.NET WebApp

    Clicking Publish will cause Visual Studio to build your project. If there aren’t any compilation errors, your project files will be pushed up to Azure. Go ahead and try it!

    You can verify that your application is running by visiting http://yourprojectname.azurewebsites.net in a browser. So far, so good! Now we’ll use Let’s Encrypt to enable secure HTTPS connections to your application.

    Set up Let’s Encrypt for TLS

    There are a few steps to getting Let’s Encrypt set up with your Azure App Service application:

  • Upgrade your App Service plan to one that supports Server Name Indication (SNI)
  • Map a custom domain name to your application
  • Set up the prerequisites for the Let’s Encrypt extension
  • Install and configure the Let’s Encrypt extension

We’ll take a look at each step in detail.

    Upgrade your App Service plan

    Unfortunately, the Free tier doesn’t have support for custom certificates. You’ll need to use the Azure portal to upgrade the App Service plan to Basic (B1) or higher. You can upgrade from App Services > (your application) > Scale up (App Service plan):

    Upgrade Azure

    Pick a tier and click on Select to upgrade your plan. If you’re using the free Azure trial, the tier cost will come out of the trial credits (so you won’t be charged anything right away).

    Map a custom domain to your application

    Let’s Encrypt issues a TLS certificate for a specific domain, so you’ll need to have a domain ready. You can buy one through the Azure portal, or at a registrar like Namecheap for $10 or less.

    You’ll need to find the IP address of your App Service application, which you can find in the Azure portal at App Services > (your application) > Custom domains:

    Set up an external IP

    Using this IP address (and the assigned hostname), create these A and TXT records in the DNS record management tool of your domain registrar:

    A    *    <ip address>
    A    @    <ip address>
    A    www    <ip address>
    TXT    *    <hostname>
    TXT    @    <hostname>
    TXT    www    <hostname>

    This looks a little different in each registrar. In the Namecheap portal, it looks like this:

    Namecheap DNS records

    Once you’ve added these DNS records, you can add the hostname in the Azure portal by clicking Add hostname:

    Add a hostname

    Pick the A Record type and wait for the validation steps to occur. If the validation isn’t successful, you’ll be prompted to fix the problem. When all the checkmarks are green, click Add hostname to save the custom domain.

    It can take some time for the DNS caches across the internet to update (up to 48 hours in some cases). You can check the status of your DNS records using the dig tool, or on the web at digwebinterface.

    Set up the prerequisites for Let’s Encrypt

    The community-built Let’s Encrypt extension for Azure has a few prerequisites that must be set up. I won’t repeat these steps here because the official wiki covers them well! Jump over to How to install: Create a service principal and follow the instructions.

    One thing that tripped me up was in the Grant permissions step: the new service principal account needs to be added to both the App Service instance and the Resource Group it resides in. In both cases, select the resource (App Service or Resource Group) and open the Access control (IAM) subpanel. Click Add and follow the steps to add the service principal account as a Contributor role.

    One final note: it can take some time for the service principal permissions to populate. I had to wait almost an hour before I could continue. If you get strange errors later on when you’re configuring the extension, you may need to give it a bit more time.

    Install and configure the Azure Let’s Encrypt extension

    Now it’s time to install and configure the Azure Let’s Encrypt site extension. Go to your site’s SCM page (https://.scm.azurewebsites.net), then to Site extensions. On the gallery tab, search for “Let’s Encrypt”. Install the 32bit version and click the Restart Site button.

    After your site restarts, press the “Play” button on the extension. If you get a “No route registered for ‘/letsencrypt/'” error, try restarting the site one more time.

    The Azure Let’s Encrypt extension page should look like this:

    Azure + Let's Encrypt

    Fill out these fields:

  • Tenant – found on the More services > Azure Active Directory > Domain names screen (in the form of .onmicrosoft.com)
  • SubscriptionId – found on the App Service > Overview screen
  • ClientId – created in the previous step
  • ClientSecret – created in the previous step
  • ResourceGroupName – found on the App Service > Overview screen

Check the box to update the application settings. Click Next and give the extension some time to work.

When I first tried to save the settings, I got an error (“’authority’ Uri should have at least one segment in the path…”). If you get this error, the extension wasn’t able to automatically create the required application settings keys. You’ll need to manually create these keys (with the values from the fields above):

  • letsencrypt:Tenant
  • letsencrypt:SubscriptionId
  • letsencrypt:ClientId
  • letsencrypt:ClientSecret
  • letsencrypt:ResourceGroupName

You can create these keys in App Service > (your application) > Application settings > App settings.

    When the keys are set up correctly, you’ll see a new screen after clicking Next. Pick the custom domain you want to use, enter your email address, and click Next.

When I first did this, I got some errors about permissions. It turns out I didn’t have the service principal account added to the Resource Group as a Contributor (see the previous section). Once I did this, and gave the permissions time to propagate, the extension worked fine.

    That’s it! When you browse to https://yourcustomdomain.com, you’ll see the certificate from Let’s Encrypt in the address bar:

    Let's Encrypt

    Notice the expiration date on the certificate? Let’s Encrypt certificates are only good for 90 days before they must be renewed. Fortunately, the Let’s Encrypt extension can take care of the renewal automatically.

    Automatic certificate renewal

    When you install the extension, it sets up a WebJob that will take care of renewing the certificate every three months. You’ll need to set up a Storage account for the WebJob so it can keep track of when it needs to run.

    Set up a storage account

    Click Storage accounts on the left panel of the Azure portal, and click the Add button. Use these settings for the new Storage instance:

  • Deployment model: Resource manager
  • Account kind: General purpose
  • Performance: Standard
  • Replication: RA-GRS
  • Storage service encryption: Disabled
  • Resource group: Use existing (pick the group that contains your application)
  • Location: East US, or wherever you like

It’ll take a minute or two to deploy the storage account. When it shows up in the Storage accounts list, select it and open the Access keys panel. Copy the account name and one of the access key values and create a connection string that follows this format:

    DefaultEndpointsProtocol=https;AccountName=<your_storage_account_name>;AccountKey=<your_storage_account_access_key>

    Copy the connection string and navigate to App Services > (your application) > Application settings > App settings. Create two new settings called AzureWebJobsStorage and AzureWebJobsDashboard, and paste the connection string in both.

    Start the WebJob

    Select your application in the App Services list, and restart it for good measure. Then, open the WebJobs subpanel. You should see a “letsencrypt” job in the list:
    Let's Encrypt webjob

    Select the job and click Start. The status should switch to Running. You can click Logs to view the logs and verify that the Let’s Encrypt renewal task is completing without errors. That’s it! Your certificate will now be renewed indefinitely.

    Redirecting traffic to HTTPS

    With the Let’s Encrypt certificate installed, your site can be reached via HTTPS. However, it can still be reached over plain old HTTP. Ideally, anyone who hits your site over HTTP should be automatically redirected to HTTPS.

    This can be accomplished with a small piece of custom middleware for ASP.NET Core. In your Startup class, place this at the top of the Configure method:

    if (env.IsProduction())
    {
        app.Use(async (context, next) =>
        {
            if (context.Request.IsHttps)
            {
                // Already secure; continue down the pipeline
                await next();
            }
            else
            {
                // Rebuild the URL with the https scheme and issue a
                // permanent (301) redirect
                context.Response.Redirect($"https://{context.Request.Host}{context.Request.PathBase}{context.Request.Path}{context.Request.QueryString}", true);
            }
        });
    }

    Since the Let’s Encrypt certificate won’t be available locally, this middleware is only added to the pipeline when env.IsProduction() returns true. (It’s possible to install a local certificate for IIS Express to use in development, but that’s a post for another day!)

    Re-publish your application to Azure App Service using Visual Studio, and try accessing your site over HTTP. You’ll automatically be redirected to HTTPS. Awesome!

    Learn more

    With free certificates from Let’s Encrypt, there’s no reason not to enable TLS on your ASP.NET Core web applications. And Stormpath easily takes care of the security around user management, authentication, and authorization. It’s a win/win!

    Are you adding Let’s Encrypt to an ASP.NET Core application that isn’t hosted on Azure? I’d love to know what platform you’re using; let me know in the comments below.

    If you’re interested in learning more about authentication and user management for .NET, check out these resources:

  • Simple Social Login in ASP.NET Core
  • Token Authentication in ASP.NET Core
  • Tutorial: Deploy an ASP.NET Core Application on Linux with Docker
    The post Tutorial: Launch Your ASP.NET Core WebApp on Azure with TLS & Authentication appeared first on Stormpath User Identity API.

    September 16, 2016

    Vittorio Bertocci - MicrosoftAzure AD development lands on portal.azure.com [Technorati links]

    September 16, 2016 08:15 AM

    For the longest time, I watched with envy as my Azure colleagues drove their conference demos from the shiny portal.azure.com, while I had to stick with the good ol’ manage.windowsazure.com.

    Well, guess what! Yesterday we announced that the Azure AD management features are finally appearing in preview in portal.azure.com. Jeff wrote an excellent post about it; however, as is his nature, he focused on the administrative angle and relegated the development features to a paragraph tantamount to a footnote. That gave me enough motivation to break the blog torpor into which I’ve slid since finishing the book, and pen for you this totally unofficial guide to the awesome new development features in portal.azure.com. Enjoy!

    Basics

    Let’s take a look at this fabulous new portal, shall we? Pop open your favorite browser and navigate to https://portal.azure.com.

    You’ll land on a page like the one below.

    image

    Where is Azure AD? Click on “More services” on the left menu, and you’ll find it:

    image

    Click on it, and the next blade will open to something to this effect:

    image

    As Jeff’s post explains, the landing page offers lots of interesting insights on your Azure AD tenant, and various hooks for management actions.

    Just for kicks, let’s take a look at the Azure AD landing page in the old portal:

    image

    The first thing that jumps out: the old portal shows both VibroDirectory, the Azure AD tenant tied to my Azure subscription, and OsakaMVPDirectory, a test tenant I created when I visited Japan a couple of years ago (I need an excuse to get back there… awesome place, awesome people). That’s because the user I am signed in with, vibro@cloudidentity.net, is a user (an admin, in fact) in both tenants.
    I can easily choose which tenant I want to manage by clicking the corresponding entry.

    How do I achieve the same effect in portal.azure.com? Simple. See that user badge in the top right corner, informing you which user and tenant you are currently signed in with? Click on it:

    image

    Together with the usual account operations you expect to find there, you’ll notice that all the tenants accessible to your user are available for you to choose. Let’s see what happens if I select OsakaMVPDirectory.

    image

    Voilà! The portal changed to reflect the new tenant. As you can see, the landing page is far more barren… I’ve used that tenant just for playing a bit with Azure AD, nothing more.

    In fact, this is far more barren than you would probably expect from something displayed in an Azure portal… and here’s the kicker: that’s because this tenant has no Azure subscription associated with it! Don’t believe me? Click on all subscriptions.

    image

    That’s right. This is huge, so let me rephrase to make sure you appreciate the implications:

    You now have a portal you can use to manage Azure AD tenants that are NOT associated with an Azure subscription.

    The Office developers among you are probably jumping up and down right now. Go ahead, try it! Navigate to portal.azure.com and sign in with your Office dev account for your Office tenant; I’ll wait. See? That’s awesome!

    Now, don’t get me wrong. Having Azure AD capabilities alongside all the other Azure services you are using in your solution is a huge advantage in itself, and I am in no way trying to minimize that. I am just excited that the Azure AD development portal capabilities are no longer strictly subordinate to that.

    Enough of this – let’s take a look at the meat of the developer features: application creation and editing.

    App creation and editing

    Let’s go back to the Azure AD landing page on portal.azure.com. Where are the developer features? If you thought “Enterprise applications” – sorry, no bonus. The developer features are all available behind the sibylline moniker “App registrations”. Click on it, and you’ll find yourself on the following blade.

    image

    Those are all the apps created in this tenant – that is, applications whose Application entity resides on this very tenant.
    Let’s compare with the same view in the old portal.

    image

    Some important differences jump out:

    Let’s pick one app and see what it looks like.

    image

    The first blade, Essentials, presents a quick summary of the main properties of the app. The Settings blade, which opens automatically as soon as you select the app, corrals all the app properties into a neat set of categories. There’s even a nice search field that shows you which bucket holds the property you need.
    Nearly all the old properties are there: the rather large image below shows the mapping between old and new. I recommend clicking on the pic to display the full image.

    PortalMapping

    Most notably, the dev features in the new portal do not offer any of the operations that would affect the ServicePrincipal of your app – that is to say, the instance of the app in your own tenant. In the old portal, creating an app meant both creating an Application object (the blueprint of your app) and provisioning that app right away in your own tenant. In the new portal, creating an app means just creating the blueprint, the Application. User assignment, app role assignments, etc. are available in the admin portion of the portal – but you’ll be able to use those against your app only if you provision it in your own tenant after creation.
    If you want to provision your app in your own tenant, you need to run it, attempt signing in with a user from your tenant who has the right privileges, and grant consent when prompted. That will lead to the provisioning of the app – that is to say, the creation of the ServicePrincipal in your tenant and the assignment of the permissions you consented to (there’s a VERY detailed description of the process in this free chapter).

    There are lots of neat features tucked into those options, especially the ones that have historically been difficult to deal with in the old portal. Let’s take a look at my two favorites: permission management and manifest editing.

    If you go to the Required permissions blade (finally a good name) and click on Add, you’ll find yourself at the beginning of a nice guided experience:

    image

    Clicking on Select an API, I get to a clean list of what’s available – even including a search box.

    image

    Let’s click on the Microsoft Graph and hit Select.

    image

    Now, isn’t that super neat! You get a nice list of permissions, subdivided into application and delegated… and you even get indications of which permissions can be consented to only by administrators vs. all users! Personally, the colors give me cognitive dissonance: as a developer who isn’t often an admin, the permissions requiring admin consent are the problematic ones. But! The information is there, and that wasn’t the case before.

    The other feature I really like, and I am sure will be your favorite too, is inline editing of the manifest.
    Azure AD applications have lots of settings that can’t be accessed via the portal – and sometimes it’s just better to be able to cut & paste settings directly. For that purpose, the old portal offered the ability to download the app manifest (a JSON dump of the Application object, really), edit it locally, and re-upload it to apply changes.
    In the new portal, however, you can edit the manifest in place – no need to go through the download-edit-upload cycle! You can access the feature by going back to the Essentials blade and clicking on Edit manifest.

    image

    There’s even some rudimentary auto-completion support, which is great for people like myself with a nonexistent memory for keywords.

    Try it out!

    As diligently reported by the header of each and every blade, this stuff is still in preview. Your input is always super valuable – the right place to provide it in this case is in the ‘Admin Portal’ section of our feedback forum.

    I hope you’ll enjoy this feature as much as I plan to enjoy shedding my old-portal complex and finally using portal.azure.com at the next conference… which, by the way, is just 10 days away! See you in Atlanta!

    September 14, 2016

    KatasoftSpring Boot WebMVC – Spring Boot Technical Concepts Series, Part 3 [Technorati links]

    September 14, 2016 04:11 PM

    Spring Boot, with Spring Boot WebMVC, makes it easy to create MVC apps with very clear delineations and interactions. The Model represents formal underlying data constructs that the View uses to present the user with the look and feel of the application. A Controller is like a traffic cop: it receives incoming requests (traffic) and routes that traffic according to your application’s configuration.

    This is a huge upgrade from the early days of JSP, when it was not uncommon to have one (or a small number of) template files, each with its own baked-in logic. The result was basically one giant View with internal logic to deal with inputs and session objects. This was a super bad design AND it didn’t scale well. In practice, you ended up with bloated, monolithic template files that had a heavy mix of Java and template code.

    So, how do we use Spring Boot to create web MVC applications? I can’t wait to show you how simple it is! We’ll start with a very simple RESTful API example and then expand the example to use Thymeleaf, a modern templating engine.

    The code used throughout this post can be found here. The examples below use HTTPie, a modern curl replacement.

    Looking for a deeper dive? In the next post of our Spring Boot Technical Series we’ll dig even deeper into Thymeleaf for form validation and advanced model handling.

    Set Up Your pom.xml in Spring Boot WebMVC

    Here’s a snippet from the pom.xml file. The only dependency is spring-boot-starter-web (the parent takes care of all versioning):

    ...
    
    <parent>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-parent</artifactId>
      <version>1.4.0.RELEASE</version>
    </parent>
    
    ...
    
    <dependencies>
    
      ...
    
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
      </dependency>
    
      ...
    
    </dependencies>
    
    ...

    Build a RESTful API in 10 Lines

    Let’s take a look at the simplest Spring Boot MVC “Hello World” application:

    @SpringBootApplication
    @RestController
    public class SpringMVCApplication {
        public static void main(String[] args) {
            SpringApplication.run(SpringMVCApplication.class, args);
        }
    
        @RequestMapping("/")
        public String helloWorld() { return "Hello World!"; }
    }

    To tell the truth, we are cheating just a little bit here. We’ve put the entire application in a single class, and it’s responsible for the Controller as well as the Spring Boot application itself.

    In this case, there really isn’t a model or a view. We’ll get to that shortly.

    The @SpringBootApplication annotation makes it (not surprisingly) a Spring Boot application. The annotation actually is a shorthand for three other annotations:

  • @Configuration – Tells Spring Boot to look for bean definitions that should be loaded into the application context in this class (we don’t have any).
  • @EnableAutoConfiguration – Automatically loads beans based on configuration and other bean definitions.
  • @ComponentScan – Tells Spring Boot to look for other components (as well as services and configurations) that are in the same package as this application. This makes it easy to set up external Controllers without additional coding or configuration. We’ll see this next.

    We also get some additional Spring Boot autoconfiguration magic: if it finds spring-webmvc on the classpath, we don’t have to explicitly add the @EnableWebMvc annotation.

    So, @SpringBootApplication packs quite a punch!
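    In other words, the single @SpringBootApplication annotation is roughly equivalent to declaring the three annotations yourself. This is a sketch only (Spring imports omitted, and the real annotation also sets sensible attribute defaults):

```java
// Sketch: roughly what @SpringBootApplication stands in for.
// (Not meant to compile standalone; org.springframework imports omitted.)
@Configuration
@EnableAutoConfiguration
@ComponentScan
public class SpringMVCApplication {
    // ... same main method and mappings as before ...
}
```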

    The @RestController annotation tells Spring Boot that this class will also function as a controller and will return a particular type of response – a RESTful one. This annotation is also multiple annotations bundled up into one:

  • @Controller – Tells Spring Boot that this is a controller component
  • @ResponseBody – Tells Spring Boot to return data, not a view

    You can fire up this application and then run:

    http localhost:8080

    You’ll see:

    HTTP/1.1 200
    Content-Length: 12
    Content-Type: text/plain;charset=UTF-8
    Date: Tue, 06 Sep 2016 19:48:46 GMT
    
    Hello World!

    Working with Models

    The previous example was really only a Controller. Let’s add in some Models. Once again, Spring Boot WebMVC makes this super easy.

    First, let’s see what’s going on in our Controller:

    @RestController
    public class MathController {
    
        @RequestMapping(path = "/maths", method = POST)
        public MathResponse maths(@RequestBody MathRequest req) {
            return compute(req);
        }
    
        private MathResponse compute(MathRequest req) {
          ...
        }
    }

    The @RequestMapping annotation shows that the maths method is mapped to the /maths path and will only accept POST requests.

    The method signature shows that maths returns a model object of type MathResponse, and expects a parameter of type MathRequest.
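    The body of compute is elided above. Here’s a minimal plain-Java sketch of what it might do, supporting only the ‘square’ operation used in this example (the class and method names here are illustrative; the post’s real implementation isn’t shown):

```java
// Illustrative sketch only -- the actual compute() body is not shown in the post.
public class ComputeSketch {

    // Apply the requested operation to the input number.
    static int compute(String operation, int num) {
        switch (operation) {
            case "square":
                return num * num;
            default:
                throw new IllegalArgumentException("Unknown operation: " + operation);
        }
    }

    public static void main(String[] args) {
        System.out.println(compute("square", 5)); // prints 25
    }
}
```

In the real controller, this result would be wrapped in a MathResponse along with the status and message fields shown in the JSON below.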

    Let’s see what a request produces:

    http -v POST localhost:8080/maths num=5 operation=square
    
    POST /maths HTTP/1.1
    ...
    {
        "num": "5",
        "operation": "square"
    }
    
    HTTP/1.1 200
    ...
    {
        "input": 5,
        "msg": "Operation 'square' is successful.",
        "operation": "square",
        "result": 25,
        "status": "SUCCESS"
    }

    Notice that a JSON object is passed in and a JSON object is returned. There’s some great Spring Boot magic going on here.

    Jackson for Java to JSON Mapping

    In the old days before Spring Boot and Spring Boot WebMVC (like 2 years ago), you had to manually deserialize incoming JSON into Java objects, and serialize Java objects to be returned as JSON. This was often done with the Jackson JSON mapper library.

    Spring Boot includes Jackson by default and attempts to map JSON to Java Objects (and back) automatically. Now, our controller method signature starts to make more sense:

    public MathResponse maths(@RequestBody MathRequest req)

    Note: Remember from before that using the @RestController annotation automatically ensures that all responses are treated as @ResponseBody (that is, data – not a view).

    Here’s our MathRequest model object:

    public class MathRequest {
    
        private int num;
        private String operation;
    
        // getters and setters here
    }

    This is a pretty straightforward POJO. Jackson can easily handle taking in the JSON above, creating a MathRequest object, and passing it into the maths method.

    Here’s our MathResponse model object:

    @JsonInclude(Include.NON_NULL)
    public class MathResponse {
    
        public enum Status {
            SUCCESS, ERROR
        }
    
        private String msg;
        private Status status;
        private String operation;
        private Integer input;
        private Integer result;
    
        // getters and setters here
    }

    Notice that in this case, we’re using the @JsonInclude(Include.NON_NULL) annotation. This provides a hint to Jackson that any null value in the model object should be left out of the response.
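    The effect of @JsonInclude(Include.NON_NULL) can be illustrated with a plain-Java sketch that mimics what Jackson does (this is not Jackson itself; the names below are made up for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NonNullSketch {

    // Mimics @JsonInclude(Include.NON_NULL): null fields are simply
    // omitted from the serialized output.
    static Map<String, Object> toJsonMap(String msg, String status, Integer result) {
        Map<String, Object> out = new LinkedHashMap<>();
        if (msg != null)    out.put("msg", msg);
        if (status != null) out.put("status", status);
        if (result != null) out.put("result", result);
        return out;
    }

    public static void main(String[] args) {
        // 'result' is null here, so it never appears in the output:
        System.out.println(toJsonMap("ok", "SUCCESS", null)); // prints {msg=ok, status=SUCCESS}
    }
}
```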

    Return A View For the Full MVC Experience

    In the last section, we added Model capabilities to our application to go along with the Controller. To round out our conversation, we will now make use of Views.

    To do this, we’ll start by adding in Thymeleaf as a dependency to our application. Thymeleaf is a modern templating engine that’s very easy to use with Spring Boot.

    We simply replace:

    <artifactId>spring-boot-starter-web</artifactId>

    With:

    <artifactId>spring-boot-starter-thymeleaf</artifactId>

    Let’s take a look at our controller:

    @Controller
    public class MathController {
    
        @Autowired
        MathService mathService;
    
        @RequestMapping(path = "/compute", method = GET)
        public String computeForm() {
            return "compute-form";
        }
    
        @RequestMapping(path = "/compute", method = POST)
        public String computeResult(MathRequest req, Model model) {
            model.addAttribute("mathResponse", mathService.compute(req));
    
            return "compute-result";
        }
    }

    For both methods, computeForm and computeResult, the path is the same: /compute. That’s where the method attribute comes in. computeForm is only for GET requests and computeResult is only for POST requests.

    computeForm simply returns a template called compute-form. Using the default location for templates, we create the file src/main/resources/templates/compute-form.html, which displays a simple form for input.

    The computeResult method takes MathRequest and Model objects as parameters. Spring Boot works its magic to bind the form submission to a MathRequest object (for form posts, this is done by Spring MVC’s data binding rather than the Jackson JSON mapping described before). And Spring Boot automatically passes in the Model object. Any attribute added to this model object is available to the template ultimately returned by the method.

    The line model.addAttribute("mathResponse", mathService.compute(req)); ensures that the resulting MathResponse object is added to the model, which makes it available to the returned template. In this case, the template is compute-result.html:

    ...
    <div th:if="${mathResponse.status.name()} == 'ERROR'">
        <h1 th:text="'ERROR: ' + ${mathResponse.msg}"/>
    </div>
    <div th:if="${mathResponse.status.name()} == 'SUCCESS'">
        <h1 th:text="${mathResponse.input} + ' ' + ${mathResponse.operation} + 'd is: ' + ${mathResponse.result}"/>
    </div>
    ...

    The above snippet is the Thymeleaf syntax for working with the mathResponse object from the model. If there was an error, we show the message. If the operation was successful, we show the result.
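    To make the template’s branching concrete, here’s the same logic as a plain-Java sketch (illustrative only; the real rendering is done by Thymeleaf). Note how the expression ${operation} + 'd is: ' turns ‘square’ into ‘squared is: ’:

```java
public class RenderSketch {

    // Mirrors the Thymeleaf template's two branches in plain Java
    // (illustrative only; Thymeleaf does the actual rendering).
    static String render(String status, String msg, Integer input,
                         String operation, Integer result) {
        if ("ERROR".equals(status)) {
            return "ERROR: " + msg;
        }
        return input + " " + operation + "d is: " + result;
    }

    public static void main(String[] args) {
        System.out.println(render("SUCCESS", null, 5, "square", 25));
        // prints: 5 squared is: 25
    }
}
```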

    Now I Know My MVC, Won’t You Sing Along With Me?

    Here’s a partial view of the project structure:

    .
    ├── java
    │   └── com
    │       └── stormpath
    │           └── example
    │               ├── controller
    │               │   ├── MathController.java
    │               │   └── MathRestController.java
    │               └── model
    │                   ├── MathRequest.java
    │                   └── MathResponse.java
    └── resources
        └── templates
            ├── compute-form.html
            └── compute-result.html

    The Models used in the example are MathRequest and MathResponse. The Views are in the templates folder: compute-form.html and compute-result.html. And the Controllers are MathRestController and MathController.

    Having the concerns separated in this way makes for a very clear and easy-to-follow application.

    In the next installment of the Spring Boot series, we will delve deeper into Thymeleaf templates, including form validation and error messaging.

    Learn More

    Need to catch up on the first two posts from this series, or just can’t wait for the next one? We’ve got you covered:

  • Default Starters — Spring Boot Technical Concepts Series, Part 1
  • Dependency Injection — Spring Boot Technical Concepts Series, Part 2
  • Secure Your Spring Boot WebApp with Apache & LetsEncrypt SSL in 20 Minutes
  • Tutorial: Build a Flexible CRUD App with Spring Boot in 20 Minutes
  • Watch: JWTs in Java for Microservices and CSRF Prevention
    The post Spring Boot WebMVC – Spring Boot Technical Concepts Series, Part 3 appeared first on Stormpath User Identity API.

    September 13, 2016

    OpenID.netHarmonizing IETF SCIM and OpenID Connect: Enabling OIDC Clients to Use SCIM Services [Technorati links]

    September 13, 2016 07:18 PM

    OpenID Connect (OIDC) 1.0 is a key component of the “Cloud Identity” family of standards. At Oracle, we have been impressed by its ability to support federated identity both for cloud business services and in the enterprise. This is why we recently joined the OpenID Foundation as a Sustaining Corporate Member.

    In addition to OIDC, we are also strong proponents of the IETF SCIM standard. SCIM provides a JSON-based standard representation for users and groups, together with REST APIs for operations over identity objects. The schema for user objects is extensible and includes support for attributes that are commonly used in business services, such as group, role and organization. 

    Federated identity involves two components: secure delivery of user authentication information to a relying party (RP), as well as user profile or attribute information. Many of our customers and developers have asked us: can OIDC clients interact with a SCIM endpoint to obtain or update identity data? In other words, can we combine SCIM and OIDC to solve a traditional use-case supported by LDAP for enterprise applications (bind, attribute lookup), recast for the modern frameworks of REST and cloud services?

    Working collaboratively with other industry leaders, we have published just such a proposal[1]. The draft explains how an OpenID Connect RP can interact with a SCIM endpoint to obtain or update user information. This allows business services to use the standard SCIM representations for users and groups, yet have the information conveyed to the service in a single technology stack based upon the OIDC protocols.

    SAML, OIDC, SCIM and OAuth are the major architectural “pillars” of cloud identity. We would like to see them work together in a uniform and consistent way to solve cloud business service use-cases. Harmonizing SCIM and OIDC is an important step in that direction.

    Prateek Mishra, Oracle

    [1] http://openid.net/specs/openid-connect-scim-profile-1_0.html   

    KatasoftAuthentication with Salesforce, SAML, & Stormpath in 15 Minutes [Technorati links]

    September 13, 2016 04:41 PM

    Salesforce is a popular business software platform with many functions and features – not just a CRM for B2B applications. Allowing users to log in with their Salesforce credentials can be necessary functionality, but working with SAML is often a developer’s least favorite task. That’s where Single Sign-On with the Stormpath Java SDK and Spring Boot integration comes in.

    In this tutorial, I’ll walk you through how simple it is to configure SAML single sign-on with Stormpath and connect it to Salesforce.

    Setup Salesforce to Connect to Stormpath

    To begin, we have to enable SAML on both the Stormpath and Salesforce sides and then connect the two. We do this via the Salesforce front-end and the Stormpath Admin Console screens. To connect Salesforce to our Stormpath tenant we need to modify three parts of the global settings from Salesforce — the Identity Provider, Single Sign-On, and the Connected App.

    All of these settings can be found under Setup Home when clicking on the gear icon on the top-right.

    Identity Provider

    SAML breaks authentication into three parts – the User, the Service Provider, and the Identity Provider. The Service Provider offers the service the user wants to access, while the Identity Provider authenticates the user and vouches for their identity. The most common identity providers are Facebook and Google. You’ve probably seen the ‘Login with Google’ buttons on various sign-in pages.

    We need to set our Salesforce instance up as an Identity Provider. The screen for this is under Settings > Identity > Identity Provider.

    Just click on Enable Identity Provider. Then click Save and download both the Certificate and Metadata (which we will use in a moment).

    Single Sign-On

    The term Single Sign-On (SSO) encapsulates what SAML allows — users accessing various sites and resources with one credential. We enable this on Salesforce by going to Settings > Identity > Single Sign-On Settings. Click Edit, check ‘SAML Enabled’, and then click Save. Finally, click ‘New from Metadata File’, select the metadata we just downloaded and click Create. Don’t worry about filling in details.

    Connected App

    The last part of our three-part Salesforce configuration is Apps. Apps are how Salesforce enables functionality. Go to Platform Tools > Apps > Apps. Scroll down to the Connected Apps section and click New. Type in a name and email (anything will do), scroll down to the Web App Settings, and check Enable SAML. Type anything you like into the Entity ID (like ‘changeme’) and ACS URL (like ‘http://example.com’) – we’ll be filling these in with details from Stormpath shortly. Then set the Name ID Format to emailAddress, and click Save.
    Salesforce connected apps

    Click on Manage and make a note of the SP-Initiated Redirect Endpoint. We’ll be using these details in our Stormpath configuration.

    Setup Your SAML Integration in Stormpath

    The second half of our setup tasks happen in your Stormpath Admin Console. Primarily this involves three things — creating a SAML Directory, linking your Application, and configuring Mapping Attributes.

    Create a SAML Directory

    In the Directories tab, click on Create Directory, select SAML from the Directory Type, and give it a name. Enter in the endpoint we just mentioned into both URL fields (Login/Logout) and copy the contents of the certificate we downloaded into the Cert box. Make sure the Algorithm is RSA-SHA256 and click the create button. Your new directory should be shown in the directories list.

    Stormpath SAML Admin Console

    Link Your Application

    Before we move on to the Stormpath Application, we need to link the directory we just created to our Salesforce Application, using the Entity ID and ACS URL fields. Enter the directory HREF (you can see it when you click the directory) into Entity ID, and the Assertion Consumer Service URL (shown in the Identity Provider tab, and at the bottom of the directory page) into ACS URL. Just click on Edit, change the fields, and click Save.
    Salesforce WebApp Settings

    Configure Your Account Store

    Now we need to set up the application you link to when authenticating via Stormpath. Open up the application you intend to use via the Applications tab. Make sure the Authorized Callback URIs contains the URL of your user interface. (If you are running the app locally, the callback should be http://localhost:8080/stormpathCallback).

    Click on the Account Stores navigation button and then Add Account Store. You should be in the Directories tab from which you can select the directory we created above. Click Create Mappings. A mapping should appear in the list of stores for your application.
    SAML Account Store Config

    Booting with Spring Boot

    To determine if our initial setup has been successful, we need an application that is linked to Stormpath. We have a sample setup here for this tutorial. You will need to update the application.properties file in src/main/resources to point to your application and use the right keys.

    stormpath.application.href = https://api.stormpath.com/v1/applications/5ikoEqLaKz1Rocw2QuRjpM
    stormpath.apiKey.id = <your api key>
    stormpath.apiKey.secret = <your api secret>

    Note: In production, you shouldn’t put your application href and keys into application.properties. It’s better to use environment variables than to bake these values into your code.

    You should now be able to boot up directly using Maven.

    mvn spring-boot:run

    Browsing to localhost:8080 should show you a simple homepage.

    Local Host -- Salesforce / Spring Boot WebApp

    Clicking on the Restricted button will show the login screen which now has a Salesforce login button.
    Stormpath Login Screen with Salesforce

    Clicking on the Salesforce button should take you to a Salesforce login page.

    Salesforce Login

    Once you log in, you will be taken back to the Spring Boot Application page, but now with a hello message displayed.

    Restricted View

    The reason we’re seeing NOT_PROVIDED is because we haven’t set up our attribute mappings.

    Configure Attribute Mappings

    So far, all we’ve set up is how we identify the user, and that’s via username. (We set it using the Name ID Format in Salesforce when we created our application.) However, if we look at the template used to generate our logged-in homepage, we can see it uses the fullName on the account, which we haven’t mapped yet.

    <h1 th:if="${account}" th:inline="text">Hello, [[${account.fullName}]]!</h1>

    In Stormpath, the account fullName is built from the given and last names. See this explanation in the Stormpath documentation to learn more about account fields.

    For now, we need to map those values onto the SAML data from Salesforce, and then from the SAML data to the relevant Stormpath values.
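    The end result of that two-hop mapping can be sketched in plain Java (illustrative only; Stormpath derives fullName internally from the two mapped fields):

```java
public class FullNameSketch {

    // Mimics how fullName relates to the two mapped fields, givenName and
    // surname (illustrative; the real derivation happens inside Stormpath).
    static String fullName(String givenName, String surname) {
        return givenName + " " + surname;
    }

    public static void main(String[] args) {
        // Without the attribute mappings, both fields stay unset and the page
        // shows NOT_PROVIDED; with them, the full name appears:
        System.out.println(fullName("Ada", "Lovelace")); // prints Ada Lovelace
    }
}
```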

    From Salesforce

    Inside of your application, at the bottom, is a section called Custom Attributes.

    Salesforce Custom Attributes

    Click on the New button. This will bring up a dialogue with Key and Value fields. Inside Key put ‘firstname’. Then click on Insert Field, click on $User > and then First Name, and then click Insert. This will put the correct string into the Value field which is the user’s first name. Click Save.

    Do this again for the user’s last name and you should have two custom attributes defined.
    Salesforce Custom Attributes

    To Stormpath

    In the Stormpath Admin Console, click the Directories tab, select the directory we created above, and scroll down to the Attribute Mappings tab. When you click into that tab, you should see three columns – Attribute Name, Attribute Name Format, and Stormpath Field Names. For the first column put in firstname and for the last put in givenName (the middle field is optional). Then, in another row, put in lastname and surname, respectively.
    Stormpath SAML Admin Console

    Click save!

    Restart Your Application = Success!

    Now if we restart our local application and log in again, we should see the user’s (in this case my) first and last name pulled in from Salesforce.
    Salesforce SAML Login Screen

    Learn More

    As you’ve hopefully seen from this tutorial, setting up single sign-on with Stormpath and Salesforce makes working with SAML a breeze! To learn more about authentication with Stormpath, or our SAML integration, check out these resources:

  • Watch: No-Code SAML Support for SaaS Applications
  • Build a No-Database Spring Boot Application with Stormpath Custom Data
  • Add Google Login to Your Java Single Sign-On Setup

    The post Authentication with Salesforce, SAML, &amp; Stormpath in 15 Minutes appeared first on Stormpath User Identity API.

    September 12, 2016

    KatasoftSecure Your Spring Boot WebApp with Apache and LetsEncrypt SSL in 20 Minutes [Technorati links]

    September 12, 2016 06:34 PM

    Spring Boot can run as a standalone server, but putting it behind an Apache web server has several advantages, such as load balancing and cluster management. Now with LetsEncrypt it’s easier than ever (and free) to secure your site with SSL.

    In this tutorial, we’ll secure an Apache server with SSL and forward requests to a Spring Boot app running on the same machine. (And once you’re done you can add Stormpath’s Spring Boot integration for robust, secure identity
    management that sets up in minutes.)

    Set Up Your Spring Boot Application

    The most basic Spring Boot webapp just shows a homepage. Using Maven, this has four files: pom.xml, Application.java, RequestController.java, and home.html.
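For reference, here is the resulting layout (a sketch: the root folder name is taken from the pom.xml artifactId; the package paths match those given below):

```
basic-web/
├── pom.xml
└── src/
    └── main/
        ├── java/
        │   └── com/stormpath/tutorial/
        │       ├── Application.java
        │       └── RequestController.java
        └── resources/
            └── static/
                └── home.html
```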

    The pom.xml file (in the root folder) declares four things: application details, starter parent, starter web dependency, and the Maven plugin (for convenience in running from the console).

    <project>
    
        <modelVersion>4.0.0</modelVersion>
    
        <groupId>com.stormpath.sample</groupId>
        <artifactId>basic-web</artifactId>
        <version>0.1.0</version>
    
        <parent>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-parent</artifactId>
            <version>1.4.0.RELEASE</version>
        </parent>
    
        <dependencies>
           <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-web</artifactId>
            </dependency>
        </dependencies>
    
        <build>
            <plugins>
                <plugin>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-maven-plugin</artifactId>
                </plugin>
            </plugins>
        </build>
    
    </project>

    Application.java (src/main/java/com/stormpath/tutorial) simply declares the application as a Spring Boot app.

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class Application {
        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }

    RequestController.java (src/main/java/com/stormpath/tutorial) maps all requests to the homepage.

    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;

    @Controller
    public class RequestController {

        @RequestMapping("/")
        String home() {
            return "home.html";
        }
    }

    Finally, home.html (src/main/resources/static) just declares a title and message.

    <!DOCTYPE html>
    <html>
    <head><title>My App</title></head>
    <body><h1>Hello there</h1></body>
    </html>

    Note: you can clone this basic project from the GitHub repo.

    Next, run:

    mvn spring-boot:run

    You should see the page when browsing to localhost:8080.

    Launch your Spring Boot webapp

    Launch Apache

    Next, we need to fire up Apache. I created an Ubuntu instance on EC2 (check out the AWS Documentation for a getting started guide). I then logged in and installed Apache with the following:

    sudo apt-get install apache2

    This should install and start an Apache server running on port 80. After adding HTTP to the instance inbound security group (again here, the AWS Documentation contains a guide) you should be able to browse to the public DNS.
    Apache Ubuntu default page

    Add LetsEncrypt

    LetsEncrypt has policies against generating certificates for certain domains, and amazonaws.com is one of them (because its hostnames are normally transient). You need to add a CNAME record on a domain you own that points to the instance you created. Here I’m using kewp.net.za.

    The following commands should obtain and install an SSL certificate for your domain automatically.

    sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
    cd /opt/letsencrypt
    ./letsencrypt-auto --apache -d kewp.net.za

    Browsing to your personal domain should now bring up the Apache homepage with SSL.
    Apache with SSL from LetsEncrypt

    Note: Chrome didn’t like the security of my page (I didn’t get the green icon) because the standard Ubuntu front-end returns unencrypted (http) contents.

    Build a Connector for Spring Boot

    We have to tell Spring Boot to make a connector using AJP, a proxy protocol that connects Apache to Tomcat. To do this, add the following to the bottom of the class in Application.java.

    // Requires these imports at the top of Application.java:
    // import org.apache.catalina.connector.Connector;
    // import org.springframework.context.annotation.Bean;
    // import org.springframework.boot.context.embedded.EmbeddedServletContainerFactory;
    // import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
    @Bean
    public EmbeddedServletContainerFactory servletContainer() {

        TomcatEmbeddedServletContainerFactory tomcat = new TomcatEmbeddedServletContainerFactory();

        // Plain (non-TLS) AJP connector on port 9090 for Apache to forward to
        Connector ajpConnector = new Connector("AJP/1.3");
        ajpConnector.setProtocol("AJP/1.3");
        ajpConnector.setPort(9090);
        ajpConnector.setSecure(false);
        ajpConnector.setAllowTrace(false);
        ajpConnector.setScheme("http");
        tomcat.addAdditionalTomcatConnectors(ajpConnector);

        return tomcat;
    }

    We’re setting the AJP port to 9090 manually. You might want to add a variable to application.properties and pull it in with @Value to make it more configurable.
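For example (a sketch: the property name ajp.port is our own invention, not something from the tutorial repo), you could define the port in application.properties and inject it into the class with @Value("${ajp.port}") on an int field, then use that field in place of the hard-coded 9090:

```
# application.properties: hypothetical property for the AJP connector port
ajp.port=9090
```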

    Restart the app as above and you should see messages that Tomcat is now listening on both port 8080 and port 9090. Note: the GitHub repository above has the connector code included so you can just use that from the start.

    Run the Application on Your Instance

    In the screenshots above I’ve been running the web app on my local Windows machine for testing. To get it to run on your instance just do the following.

    git clone https://github.com/stormpath/apache-ssl-tutorial
    cd apache-ssl-tutorial
    mvn spring-boot:run

    Reroute Apache

    Now we tell Apache to pass all traffic to our application. We can use the proxy and proxy_ajp modules for that. But first, we need to enable them.

    sudo a2enmod proxy
    sudo a2enmod proxy_ajp

    Now we need to update the virtual host on port 443 to use the connector we created. For me the relevant file was in /etc/apache2/sites-available/000-default-le-ssl.conf. Add the following to the bottom of the <VirtualHost *:443> element.

    ProxyPass / ajp://localhost:9090/
    ProxyPassReverse / ajp://localhost:9090/

    And at last, restart the server.

    sudo service apache2 restart

    Add Another Security Group

    Now we need to ensure that EC2 allows HTTPS traffic to the instance. Add HTTPS to the inbound security group as before.

    Fire Up Your New (Secure) Spring Boot Application!

    Now when you browse to your domain, you should see our Spring Boot web app, secured behind SSL!

    Spring Boot webapp with SSL

    Configure SSL Between Apache and Tomcat

    One last thing: the traffic between Apache and Tomcat is currently unencrypted (HTTP). This can be a problem for some apps (like Stormpath, which requires a secure connection). To fix this, we use Tomcat’s RemoteIpValve. Enable it by adding the following to your application.properties.

    server.tomcat.remote_ip_header=x-forwarded-for
    server.tomcat.protocol_header=x-forwarded-proto

    Apache will set these headers by default and then Tomcat (embedded in Spring Boot) will properly identify the incoming traffic as SSL.
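Concretely, each proxied request reaches Tomcat carrying forwarding headers shaped like this (illustrative values only):

```
X-Forwarded-For: 203.0.113.7
X-Forwarded-Proto: https
```

The two properties above simply tell Tomcat which header names to read when reconstructing the original client address and scheme.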

    Add Authentication

    Application security is intrinsic to what we do here at Stormpath. Our team of Java security experts have just released the 1.0 version of our Java SDK, and with it massive updates to our Spring and Spring Boot integrations. You can add authentication for secure user management in this or any Spring Boot application in just 15 minutes! Check out our Spring Boot Quickstart to learn how!

    Spring Boot webapp with Apache and LetsEncrypt SSL

    The post Secure Your Spring Boot WebApp with Apache and LetsEncrypt SSL in 20 Minutes appeared first on Stormpath User Identity API.

    Ludovic Poitou - ForgeRockOpenDJ: Monitoring Unindexed Searches… [Technorati links]

    September 12, 2016 01:41 PM

    OpenDJ, the open source LDAP directory service, makes use of indexes to optimise search queries. When a search query doesn’t match any index, the server will cursor through the whole database to return the entries, if any, that match the search filter. These unindexed queries can require a lot of resources: I/Os, CPU… In order to reduce the resource consumption, OpenDJ rejects unindexed queries by default, except for the Root DNs (i.e. for cn=Directory Manager).

    In previous articles, I’ve talked about privileges for administrative accounts, and also about Analyzing Search Filters and Indexes.

    Today, I’m going to show you how to monitor for unindexed searches by keeping a dedicated log file, using the traditional access logger and filtering criteria.

    First, we’re going to create a new access logger, named “Searches” that will write its messages under “logs/search”.

    dsconfig -D cn=directory\ manager -w secret12 -h localhost -p 4444 -n -X \
        create-log-publisher \
        --set enabled:true \
        --set log-file:logs/search \
        --set filtering-policy:inclusive \
        --set log-format:combined \
        --type file-based-access \
        --publisher-name Searches

    Then we define a filtering criteria that restricts what is logged in that file: only “search” operations that are marked as “unindexed” and take more than 5000 milliseconds.

    dsconfig -D cn=directory\ manager -w secret12 -h localhost -p 4444 -n -X \
        create-access-log-filtering-criteria \
        --publisher-name Searches \
        --set log-record-type:search \
        --set search-response-is-indexed:false \
        --set response-etime-greater-than:5000 \
        --type generic \
        --criteria-name Expensive\ Searches

    Voila! Now, whenever a search request is unindexed and takes more than 5 seconds, the server will log the request to logs/search (on a single line) as below:

    $ tail logs/search
    [12/Sep/2016:14:25:31 +0200] SEARCH conn=10 op=1 msgID=2 base="dc=example,dc=com" scope=sub filter="(objectclass=*)" attrs="+,*" result=0 nentries=10003 unindexed etime=6542

    This file can be monitored and used to trigger alerts to administrators, or simply used to collect and analyse the filters that result in unindexed requests, in order to better tune the OpenDJ indexes.
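As a small sketch of what such monitoring could look like (hypothetical; it only assumes log lines shaped like the example above), you can pull the elapsed time out of an entry with standard shell tools:

```shell
# Hypothetical sketch: extract the etime (milliseconds) from a search-log line.
# The sample line mirrors the tail output shown above.
line='[12/Sep/2016:14:25:31 +0200] SEARCH conn=10 op=1 msgID=2 base="dc=example,dc=com" scope=sub filter="(objectclass=*)" attrs="+,*" result=0 nentries=10003 unindexed etime=6542'
etime=$(printf '%s\n' "$line" | grep -o 'etime=[0-9]*' | cut -d= -f2)
echo "$etime"   # prints 6542
```

From there it is a small step to alert whenever the extracted value crosses a threshold.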

    Note that sometimes it is a good option to leave some requests unindexed, because the cost of indexing them outweighs the benefit: for example, when the requests are infrequent, run by specific administrators for reporting purposes, and expected to return a large number of entries. In that case, a best practice is to have a dedicated replica for administration and run these expensive requests there. It is also better if the client applications are tuned to expect these requests to take a long time.


    Filed under: Directory Services Tagged: directory-server, ForgeRock, index, ldap, opendj, opensource, performance, search, Tips, tuning

    GluuIs Google getting ready to buy Okta? [Technorati links]

    September 12, 2016 10:30 AM

    Be skeptical of “definitions” that collide with marketing imperatives.

    There’s been news recently related to a tightening relationship between Google and Okta. Here’s a quote from a recent ZD Net article:

    “Together, [Okta and Google] will provide a multi-cloud reference architecture. As customers transition to a multi-cloud environment, they’ll be able to use the Okta Identity Cloud to connect to legacy, on-premises technology. Okta and Google are also working together to equip global systems integrators, Google resellers and independent software vendors with training and tools to accelerate the move to the cloud. Additionally, Google has branded Okta as one of its “preferred identity partners” for Google Apps deployments in the enterprise.”

    This is all well and fine. Providing organizations easy access to identity security technology is a good thing. However, a Google acquisition or partnership can have interesting consequences for other vendors in the space.

    Anytime Google picks favorites there is the potential that they might leverage their near-monopolistic position in search to further their agenda. And with regard to the Okta-Google partnership, we’re already seeing a ripple effect in Okta’s positioning among Google search rankings. For example, check out this definition for the technical term “inbound saml”:

    inbound saml11   Google Search

    Inbound SAML is industry jargon for a specific use case of the SAML 2.0 open standard, which is by definition vendor neutral. This is a bad definition…

    Before going further, let me provide some context as to why this search is important. The “Inbound SAML” requirement drives revenue, not expense. A company that searches for this term is a valuable prospect. Companies invest in infrastructure rarely, and only when they are forced to do so. The ROI for infrastructure is difficult to calculate. However, the ROI for infrastructure that drives revenue is much more compelling.

    Inbound SAML enables an organization to offer SAML authentication as a front door to their digital service. It’s a common requirement for SaaS providers, who want to make sure they can support the authentication requirements of large enterprise customers. If you have this requirement, you normally don’t wait to do something about it. Frequently there is a valuable customer that needs service soon.

    So Google is giving Okta valuable free advertising for a vendor neutral search term. It’s unfair to websites that actually provide a real definition (organic) and to organizations that pay to advertise for this search term. In fact, you can see in the screenshot that an ad from another vendor, Ping Identity, is completely undermined by the full snippet of Okta’s documentation that is being displayed like a definition.

    To many people (myself included!), results displayed like this carry extra weight. Here are some other searches where Google displays its definition-style results:

    The NL West standings:

    nl west standings   Google Search

    Collusion:

    collusion   Google Search

    How to make a hard boiled egg:

    how to make a hard boiled egg   Google Search

    This type of display is typically reserved for searches that have straightforward and factual results–not vendor promotion. Google’s definition for “inbound saml” is at the very least misleading. It erodes our trust in Google, and undermines the integrity of their platform.

    Google controls much of what we see on the Internet so it is difficult to have an accurate understanding of how search results are manipulated to favor their products and the products and services of their partners. But as a vendor in a space where Google now seems to have a horse in the race, this type of preferential treatment is troubling.

    It also raises the question of why Google is being such a good friend. After the recent acquisition of Apigee, it makes one wonder. Is this a sign that Okta is the next target?

    September 09, 2016

    Paul TrevithickE-commerce and Same-Day Delivery Services [Technorati links]

    September 09, 2016 04:55 PM

    With e-commerce booming, continuing a trend that has now been established long enough for the biggest players to really master the home delivery business model, competition is higher than ever. Getting their goods out there in time to compete with the likes of Amazon and the big retailers can be a major stumbling block for companies across the country, even if all the other pieces are in place. Opting for cheap man and van hire in London for next day or same day courier services may be a suitable answer for many companies, since the UK’s capital is big enough to support healthy competition while not requiring businesses to search too far afield for enough customers to keep them afloat. In fact, in many ways the smaller e-commerce ventures actually have an advantage when it comes to same day delivery.

    Logistics can often be forgotten in the race to build the most impressive online presence, but of course this is a vital aspect of running any online sales business whether you’re delivering food, goods or pretty much anything. Many brands focus too much on image and differentiating themselves, and it all sounds wonderful in theory. Unfortunately, failure to deliver results both literally and figuratively can spell the end for many start-ups. In fact, same day delivery services in particular have been a tough one to crack even for large corporations, so how can small ventures in a city like London do a better job with local same-day courier services?

    For one thing, many companies both large and small have failed because the margins are simply not there on the products they’re trying to sell and deliver on the same day. You need to crack that before you stand a chance of success, and part of this problem inevitably involves scale. Having a reliable business model that works becomes crucial here, because it allows you to hook customers into a subscription program. This means you’re able to cover your costs by charging people in advance for the luxury of same-day deliveries, and the customer is more likely to buy more items from you to make the most of their investment. Building trust can unlock this hidden potential.

    On the face of it, small businesses aren’t going to benefit from the advantage of making hundreds of deliveries in a single round trip. However, there are ways to cut overheads – for example, running a business that picks up and delivers items without the need to store them in a warehouse in between. If your business is basically the front for a local cheap man and van hire service you’re employing, but you’re handling the customer’s needs and catering to them, there’s an opportunity there for big profits, especially if you can establish the trust we mentioned.

    At the moment there’s a particular focus among the bigger delivery companies on same-day deliveries because it’s something that hasn’t worked fantastically for anyone yet, big or small. It will be interesting to see over the next couple of years who really manages to crack this market and master the art of turning e-commerce into a truly convenient consumer solution.

    The post E-commerce and Same-Day Delivery Services appeared first on Incontexblog.org.

    Mike Jones - Microsoft“amr” Values specification addressing WGLC comments [Technorati links]

    September 09, 2016 04:52 PM

    Draft -02 of the Authentication Method Reference Values specification addresses the Working Group Last Call (WGLC) comments received. It adds an example to the multiple-channel authentication description and moves the “amr” definition into the introduction. No normative changes were made.

    The specification is available at:
    • http://tools.ietf.org/html/draft-ietf-oauth-amr-values-02

    An HTML-formatted version is also available at:
    • http://self-issued.info/docs/draft-ietf-oauth-amr-values-02.html

    Gerry Beuchelt - MITRELinks for 2016-09-08 [del.icio.us] [Technorati links]

    September 09, 2016 07:00 AM

    Matthew Gertner - AllPeersSkills You Need for a Career in Big Data [Technorati links]

    September 09, 2016 02:43 AM
    What skills do you need for a career in big data? (Photo by CC user Kayaker~commonswiki on Wikimedia Commons; image originally made by DARPA, public domain.)

    Big data is one of the fastest-growing areas in all of business, and it’s also proving to be a lucrative career area for many people interested in technology and entering a field with a high growth level. According to Forbes, the salary for technical professionals with big data expertise and related skills is $124,000. Additionally, there is virtually no limit to the industries where big data professionals are in high demand. Also according to Forbes, the top five industries with the most significant demand for talent with data-related skills are Professional, Scientific and Technical Services, IT, Manufacturing, Finance and Insurance, and Retail Trade.

    So if you’re thinking of entering the business world as a data-related specialist, what are the skills hiring managers most often want to see?

    Problem-Solving Abilities

    While many of the skills you’ll need as someone who deals with data involve technical and statistical abilities, there are also soft skills required. Many businesses hiring big data professionals want someone who not only understands the numbers and the technology, but is also a strong problem solver. Big data professionals’ role in today’s business world is often to take a problem and create a measurable solution, so it’s important to be creative and willing to think outside the box. That takes a sense of curiosity, and the willingness to explore new ways of doing things and create your own solutions.

    Hadoop

    In terms of technical skills and training, Hadoop training is undoubtedly one of the most important things you can have if you want a career involving data. Hadoop is a powerful big data platform, and it can also be tricky, which is why so many different types of companies are looking for people with broad proficiency in the platform. Hadoop training should cover the platform’s framework, and also offer both conceptual and hands-on experience. Many of the best certification and training programs will also include realistic projects to get an understanding of what it’s really like to work in a business using the platform.

    Data Visualization

    Another area of proficiency you should have if you’re pursuing a career in big data or even just technology? Data visualization skills. You’ll likely be expected to take massive amounts of information and transform them into visual elements that will provide both technical and non-technical audiences with an understanding of the insights you’re presenting.

    Programming Languages

    Finally, as well as learning the specifics of big data, it can also be helpful and make you seem more appealing to businesses who want to hire, if you know some general programming languages, such as Java or Python. This might not be an absolute requirement, but it’s a good way to set yourself apart in the competitive business world, particularly when you’re facing other candidates who have extensive big data experience. You might need something that’s distinctive to get the job, and having some knowledge of programming languages can be that distinction.

    The post Skills You Need for a Career in Big Data appeared first on All Peers.

    Matthew Gertner - AllPeersQuality Control is Important at Every Step of the Process [Technorati links]

    September 09, 2016 02:28 AM

    An important part of maintaining full quality management is ensuring the ingredients in your products meet your strict standards. It’s easy enough when you’re completely responsible for the sourcing of every single component, but few organizations can claim this autonomy. Most partner up with a chemical or additive supplier to help them find ingredients or develop entire formulas for their products. Therefore, it becomes a professional imperative to align yourself with a company that provides superior ingredients that reflect not just current marketing trends and safety regulations but your principles, too.

    The question of ingredients arises when you’re looking to update your formula to better reflect consumers’ needs. All-natural, preservative-free, environmentally friendly, and organic products are just some of the growing trends affecting the buying habits of the average North American consumer. As you develop your formula to incorporate these concerns, it’s important that you partner with a chemical supplier that can offer advice as technical chemists and process developers. Cambrian is a top chemical manufacturing company that shares their extensive market and product knowledge in order to provide innovative chemical solutions for your growing needs.

    Sourced from a global supply network, these solutions involve ingredients that will always meet your needs in regards to performance and quality control. They also have the ability to surpass them, as superior chemical distributors like Cambrian unite your North American industry with international ingredient manufacturers. By broadening your sources beyond your market, you don’t just get a trusted source – you also tap into an international market of information regarding chemical and additive regulations. Regardless of the industry in which your business is involved – whether you’re in food processing, pharmaceuticals, or something else entirely – the right chemical supplier can make the latest trends in ingredients and development a reality for your company.

    Quality control is a fundamental part of your company. Its management is a way for you to streamline your business while also ensuring your products deliver on customer satisfaction. While there are many factors involved in maintaining these standards, perhaps the most important is guaranteeing you start with the best ingredients for your products. Their properties should be considered thoroughly before you adopt them, and there’s no better way to vet their inclusion than by teaming up with an experienced chemical distributor. They’re committed to sourcing ingredients that reflect your (and your consumers’) priorities, so you can offer the best quality goods. When it comes to making decisions about where you source your ingredients, be sure to find a company you can trust to know their stuff and have the latest technology to back up their efforts.

    The post Quality Control is Important at Every Step of the Process appeared first on All Peers.

    September 08, 2016

    KatasoftTutorial: Setting Up An Awesome Git/CLI Environment on Windows [Technorati links]

    September 08, 2016 09:11 PM

    CLIs, or Command Line Interfaces, are extremely powerful when it comes to accessing the basic functions of your computer. In fact, there are some things you can only do from the command line, even on Windows. Beyond that, many programs just work better on the command line. Take Git, the most widely used modern version control system in the world today; Git was designed exclusively for the command line, and it is the only place you can run every available Git command. (Most GUIs only implement some subset of Git functionality for simplicity.)

    In this tutorial, we will learn how to set up a Git/CLI environment on Windows.

    Install Git on Windows

    Visit the Git website and download the latest Git for Windows installer. (At the time of writing, the latest version is 2.9.3.) This installer includes a command line version of Git as well as the GUI.

    Once you start the installation, you will see an easy Setup Wizard where the only thing you need to do is click Next and then Finish to complete the installation. There is no need to change the default options. The one thing I would like to highlight is that the default terminal emulator is MinTTY instead of the Windows console, because the Windows console has some limitations. We’ll learn more about these limitations as we walk through the rest of this tutorial.

    Now you are ready to start using git and run your first commands! Open Git Bash and type the following command to verify your installation:

    $ git --version

    Then enter git --help to see all the available commands.

    Congratulations! You’ve just run your first git commands!

    Using Git with PowerShell

    Using Git in PowerShell
    Thanks to our previous git installation, the git binaries path should already be set in your PATH environment variable. To check that git is available, open PowerShell and type git. If you get information related to git usage, git is ready.

    If PowerShell doesn’t recognize the command, you’ll need to set your git binary and cmd path in your environment variables. Go to Control Panel > System > Advanced system settings and select Environment Variables.

    In System Variables, find PATH and add a new entry pointing to your git binaries and cmd, in my case I have them in C:\Program Files\Git\bin and C:\Program Files\Git\cmd.

    Beautify your PowerShell

    If you click on ‘Properties’ right after clicking the small PowerShell icon in the top left corner, you will find several visual features to customize your console just the way you want.

    In ‘Edit Options’ make sure to have ‘QuickEdit Mode’ checked. This feature will allow you to select text from anywhere in PowerShell and copy the selected text with a right-click, and paste it with another right-click.

    You can explore the different tabs, select your preferred font and font size, and even set the opacity to make your console transparent if you are using PowerShell 5.

    Now that you have a nice console with much-needed copy/paste functionality, you need something else to enhance your experience as a git user: you need Posh-Git.

    Posh-Git is a package that provides powerful tab-completion facilities, as well as an enhanced prompt to help you stay on top of your repository status (file additions, modifications, and deletions).

    Posh-git Installation

    To install Posh-git let’s use what we have learned so far about git and PowerShell. Start by creating a folder ‘source’ using the mkdir command:

    PS C:\> mkdir source

    Change your working directory to ‘source’ and type clone command:

    PS C:\> cd source
    PS C:\source> git clone https://github.com/dahlbyk/posh-git.git

    Verify that you are allowed to execute scripts in PowerShell by typing ‘Get-ExecutionPolicy’. The result should be RemoteSigned or Unrestricted. If the result is Restricted, run PowerShell as administrator and type this command:

    PS C:\source> Set-ExecutionPolicy RemoteSigned -Scope CurrentUser -Confirm

    Change your working directory to Posh-git and run the install command:

    PS C:\source\> cd posh-git
    PS C:\source\posh-git> .\install.ps1

    Reload your profile for the changes to take effect:

    PS C:\source\posh-git> . $PROFILE

    And you’re done!

    You can make changes to files in your repository and explore Posh-git by typing git status.

    Setup Your SSH Keys

    Usually, you would use the HTTPS protocol to communicate with the remote Git repository where you are pushing your code. This means that you must supply your credentials (username and password) every time you interact with the server.

    If you want to avoid typing your credentials all the time, you can use SSH to communicate with the server instead. SSH stands for Secure Shell. It is a network protocol that ensures that the communication between the client and the server is secure by encrypting its contents.

    SSH is based on public-key cryptography, so in order to authenticate via SSH to the Git repository you need to have a pair of keys: one public (which will reside on the server) and one private (which you, and only you, will use to authenticate). When you connect, your SSH client uses the private key to prove your identity; the server checks that proof against the installed public key, and if it matches, you are authenticated. The private key itself is never sent to the server.

    To access your Git repositories you will need to create and install SSH keys. You can do this with OpenSSH, which comes bundled with Git. To generate your key pair, open Git Bash and enter the following command:

    $ ssh-keygen -t rsa -b 4096

    This will generate a key pair using RSA as the key type and will use 4096 bits for it. It will then prompt you to enter a location to save the key. If you press Enter, it will be saved in the default location.

    Enter a file in which to save the key (/Users/you/.ssh/id_rsa): [Press enter]

    Then you will be prompted to enter a passphrase. Type a secure one:

    Enter passphrase (empty for no passphrase): [Type a passphrase]
    Enter same passphrase again: [Type passphrase again]
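    If you ever need to script key generation (say, when provisioning machines), the same tool runs non-interactively. The flags below are standard OpenSSH options; the /tmp output path is only an example, and the empty passphrase is used here purely for illustration:

    ```shell
    # remove any leftover demo files so ssh-keygen does not prompt to overwrite
    rm -f /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
    # -q quiet, -N passphrase (empty here only for the demo), -f output path (example location)
    ssh-keygen -q -t rsa -b 4096 -N "" -f /tmp/demo_id_rsa
    # two files result: the private key and its .pub public half
    ls /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
    ```

    For your real key, stick with the interactive flow above and a proper passphrase.
    
    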

    And that’s it! You have just created an SSH key pair. Easy, isn’t it? Now you have to add your SSH key to the ssh-agent. First, start the ssh-agent (the eval wrapper exports the environment variables the agent prints):

    $ eval "$(ssh-agent -s)" [Press enter]
    > Agent pid 8692

    Then, add your SSH key to the ssh-agent:

    $ ssh-add ~/.ssh/<your_private_key_file_name>

    You now have your private key installed on your computer, but you need to set the public key on the Git remote repository. This step depends on which Git hosting service you are using. There are tutorials available for both GitHub and BitBucket, the two most popular services.
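    If you save your key somewhere other than the default path, or manage separate keys per service, an entry in ~/.ssh/config tells the SSH client which key to present to which host. The Host and User directives below are standard OpenSSH options; the IdentityFile names are hypothetical examples:

    ```
    # ~/.ssh/config (example entries; the IdentityFile paths are made up)
    Host github.com
        User git
        IdentityFile ~/.ssh/id_rsa_github

    Host bitbucket.org
        User git
        IdentityFile ~/.ssh/id_rsa_bitbucket
    ```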

    Use Console Emulators for Improved CLI Experience

    You may be asking yourself “Why should I use a console emulator on Windows instead of the native cmd?” The answer is simple: Console emulators let you choose which shell to run on them and provide you with a variety of configuration options, both for utility and aesthetics.

    Most emulators also support multiple tabs. On each tab you can run a different shell, or, if you work with multiple git repositories, you can configure tabs pointing to your different working directories. Some emulators can save the state of each tab, so when you open the emulator again they will be there just as you left them.

    Also, if you want to improve your productivity you can configure some hot-keys to speed up repetitive tasks or even use some very useful commands like cat or grep. There are several alternatives that offer lots of functionality and integrate very well with Windows. Let’s review some of them:

    ConEmu

    ConEmu allows you to run “console” applications such as cmd.exe, powershell.exe, Far Manager, bash, etc., and “graphical” applications like Notepad, PuTTY, KiTTY, GVim, Mintty, and so on. Since it is not a shell itself, it does not provide standard shell features like remote access, tab completion, or command history.

    You can pre-configure tabs, give them custom names, and assign shell scripts to run when they open, among many other options; nearly everything about ConEmu can be customized.

    Also, you can search all the text that has been printed or entered in the console history, resize the main window as much as you want, and check the progress of an operation with a quick glance at the taskbar, without bringing the app to the foreground.

    Installation is super easy: just unpack or install to any folder and run ConEmu.exe.
    ConEmu

    Cmder

    Cmder is an improved experience built on top of ConEmu. It combines ConEmu’s features with cmd enhancements from clink (such as bash-style completion in cmd.exe and PowerTab in powershell.exe) and Git support from msysgit. The current branch name is shown in the prompt. This feature is built in, so you don’t need to install any extension as we did for PowerShell.

    With Cmder you can run basic Unix commands like grep. You can also define aliases in a text file for common tasks, or use the built-in aliases like e. which opens an Explorer window at your current location. Installation is easy: choose and download your preferred Cmder version (mini or full), unzip the files, and run Cmder.exe.
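    Aliases are defined one per line in Cmder’s alias file (config\user_aliases.cmd in recent versions; the location can vary between versions), with $* forwarding any extra arguments. The aliases below are illustrative examples, not Cmder defaults:

    ```
    gs=git status $*
    gl=git log --oneline --graph $*
    co=git checkout $*
    ```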

    cmder

    Console2

    With Console2 you can not only create as many tabs as you want but also name them individually based on what is running in each. You can also assign each tab a shell script that runs automatically when it opens.

    You can even customize the keyboard shortcuts (like changing Open New Tab to Ctrl+T) and the appearance (like font, colors, and size).

    Console2 does have some drawbacks. The first time I tried to configure a new tab to point to the git shell, following the normal flow (settings > tabs > shell : git shell path), the tab opened in a separate window, outside of the Console2 context. It took me a while to figure out how to configure Console2 to open the git shell as a new tab inside its context. If you need a hand with this, you should check this link.

    It also lacks the ability to run predefined scripts automatically in multiple tabs. Instead, you have to open everything manually every time you start the application.
    Console2

    ConsoleZ

    As a fork of Console2, ConsoleZ should look quite familiar, and it will recognize all of your Console2 custom settings. If you are already using Console2, you should give it a try.

    Besides the Console2 features there are many more options in nearly all the Settings panels, like splitting Tabs into views (horizontally and vertically), settable opacity of text background color, snippets and zooming.

    As with Console2, ConsoleZ is not able to open pre-created tabs on startup.
    ConsoleZ

    PowerCmd

    PowerCmd offers similar features to the others listed above, plus some other cool features, such as auto-log (which prevents you from losing your work by saving the output of your consoles automatically), autocompletion for files under the current directory, and bookmarks with the ability to move between them easily.

    Also, you can save and restore your command line sessions from last time. This emulator isn’t free, but does offer a free trial so you can take it for a test drive.
    PowerCMD

    Go Forth and Experiment

    You now have plenty of options and enough information to get started with the command line. There are no excuses not to start playing with it using a good emulator! So choose your favorite, clone your repository, and continue running your git commands on the CLI.

    Interested in learning more about git commands and CLI tools? Check out these resources:

  • Building Simple Command Line Interfaces in Python
  • Set Up a Smoking Git Shell on Windows
  • Git in Powershell
    The post Tutorial: Setting Up An Awesome Git/CLI Environment on Windows appeared first on Stormpath User Identity API.

    Matthew Gertner - AllPeersHow Can Board Portals Save You Time and Money? [Technorati links]

    September 08, 2016 07:53 PM

    Photo by CC user kaboompics on Pixabay

    Meetings are a pain. You thought that it would be easy, that it would stick to your schedule. But someone called you up and said that the place you booked two days ago just closed down.

    You tried to reschedule, calmly telling yourself that it’s just a minor setback. On your second attempt, you realized the attendees weren’t properly informed about the meeting, so you ended up explaining the agenda instead of proceeding with the meeting.

    Those are just two of the many scenarios that may happen when planning a meeting – not to mention all the problems that may happen during a meeting.

    Entrepreneur.com stated that in America, unproductive meetings cost $37 billion a year. With that amount of money, companies could’ve invested in new ventures, paid higher wages or even used it to help others. It’s tragic to see all these resources and opportunities go down the drain and it’s all because of poorly managed meetings.

    Luckily, board portals are here to change the paradigm. Board portals are meeting management apps; they are built to make meetings better and faster. How can board portals save you time and money? Read on below…

    Less is more

    A normal meeting requires a venue, documents and presentation tools. These are all expensive resources. However, board portals can easily do all those things at a fraction of the cost.

    Board portals are digital meeting rooms that provide attendees all the tools they need to properly conduct a meeting.

    No one gets left behind

    Punctuality and attendance are big factors in meetings running long. If one attendee is late, the meeting has to start late, and if one is absent, the whole meeting has to be rescheduled, which is frustrating for everyone else.

    Board portals can send notifications so that attendees won’t forget their meetings, similar to the push notifications of social media sites.

    Smart data for smart meetings

    Proper data creates proper answers, proper answers create proper solutions, proper solutions save everyone’s time and effort.

    By collecting members’ records, the meeting organizers can easily check each member’s time and availability. This way, they can schedule more time-friendly meetings and avoid unnecessary rescheduling.

    Remote access for success

    Stuck in traffic? Can’t get to work because of unforeseen house problems? Flat tire? Not a problem.

    With remote access, users can easily join their meetings as long as they have Wi-Fi or mobile data and a device with a board portal app installed. This feature also eliminates venue issues and other possible distractions, such as picking the right clothes to wear or wishing the streets weren’t snarled with traffic.

    This is very important for emergency meetings, impromptu checks, and quick office huddles.

    A lean mean scheduling machine

    When you set the time, you have to start on time. Without a proper schedule, a meeting can be stretched, chopped up, or postponed. A good way to handle this is to tell everyone that each agenda item has its own time slot, or you can set the board portal to do it for you. By creating a schedule, you set expectations for answers and feedback.

    Setting agendas like you mean it

    Without an agenda, a meeting is just as good as a friendly get-together. It sounds fun, but it will probably waste a lot of time and money.

    Being able to set the agenda and its documents is a lifesaver for any organizer. This sets the tone and gravity of the meeting. It also helps the attendees prepare and organize themselves so that they can participate in the meeting.

    Power tools for a power meeting

    Presentations are the interactive part of the meeting. Usually a presenter shows and explains documents to get their point across.

    Board portals follow the same concept, with simple tools such as highlights, footnotes and drawing tools. These tools help the presenter give emphasis or directions on how to tackle a certain agenda item.

    A clear voting system

    A vote is worth a thousand words: it’s a compressed version of a person’s decisions, beliefs, and responses. Voting systems avoid a lot of possible chit-chat, justifications, and shifts of decision due to peer pressure.

    And it ultimately removes discussion of company politics from the meeting. If fellow members want to discuss politics, they will have to do it after the meeting.

    The goal is in your hands

    If a meeting can be shortened, shorten it. Think of board portals like Azeus Convene as the natural progression of meetings. They make meetings more objective, less repetitive, and highly interactive.

    Traditionalists might still go with face to face meetings, but with Convene as the meeting medium, you can now achieve paperless meetings, avoid wasting time, and solve agendas with a click.

    Start making your meetings more productive. Who knows? It might help you save $37 billion.

    The post How Can Board Portals Save You Time and Money? appeared first on All Peers.

    Matthew Gertner - AllPeersWill Identity Theft Be Your Business? [Technorati links]

    September 08, 2016 05:10 PM
    Photo by CC user Marcos Tulio on publicdomainpictures.net

    Could your business withstand being the victim of identity theft?

    While some companies can survive such a matter, others would either need significant time to recover or would never recover at all.

    That said, what is your business doing to steer clear of identity thieves?

    Close the Doors on I.D. Theft

    In order for your business to do its best in closing the doors on identity theft, keep these tips in mind:

       

    1. Plan – First and foremost, what kind of plan do you have set up to negate identity theft as much as possible? Unfortunately, some business owners are of the opinion that I.D. theft can’t happen to them, so they don’t have to guard against it. That line of thinking can be one of the most destructive ones possible, especially as identity thieves continue to try to exploit businesses and consumers at every turn. Always be on guard for identity theft, avoiding the idea that you and your business are untouchable. If you’re not protecting customer identities, you are setting yourself up for quite a fall.

    2. Employees – Your workers play several roles when it comes to identity theft and your business. First, they are a great line of defense against the problem, especially since they deal with clients on a firsthand basis. Make sure they stay cognizant of what is going on both online and off, looking for any red flags that may suggest your brand is being targeted for I.D. theft. Secondly, as much as you want to trust those you hire (and you should), there is always the possibility that one or more of your workers will in fact be identity thieves themselves. It should not come as a huge surprise that some businesses have been successfully targeted for I.D. theft by those right under their noses. As a result, the crime may go unnoticed for a period of time. If you suspect one or more of your workers are engaging in identity theft against you, an immediate investigation needs to take place. Remember, each day you let go by without looking into the matter is one more day you could lose money and/or clients.

    3. Education – Being educated about the dangers of identity theft is a necessity, not a choice. As a business owner, you have a responsibility to not only your customers, but also your employees, to keep your brand as removed as possible from I.D. theft. If an identity theft attack against your business succeeds, it could put you and your team out of work (depending on its severity). Because you run a business, it is important that you are as educated as possible about how identity theft works, what types of businesses are typically targeted, and how to recover from such an attack without having to close up shop. There are plenty of articles online about how to combat identity theft, not to mention videos too. Follow up on a number of these pieces to learn more about whether or not your brand is significantly at risk.

    4. Warnings – Finally, do you know the telltale signs of identity theft? If not, get up to speed sooner rather than later. For instance, if your company’s financial books are not adding up, there could be something fishy going on. The same holds true for any company credit cards showing different balances than they should. Also look at whether any employees have been acting strange as of late, especially those charged with doing your accounting tasks, etc.

     

    The negative fallout from even one successful identity theft attack against your business could be catastrophic, so do not take the matter lightly.

    If you are not guarding against identity theft, you are making it easier for such criminals to strike.

    When you have a protection monitoring plan in place covering all of your financial undertakings, you educate yourself and your workers on the dangers of I.D. theft, and you regularly review your safeguards, you greatly reduce the odds of being the next victim. Will identity theft be your business? If you care about your livelihood, you’ll certainly make it your business to guard against it.

    The post Will Identity Theft Be Your Business? appeared first on All Peers.

    Mike Jones - MicrosoftInitial Working Group Draft of OAuth Token Binding Specification [Technorati links]

    September 08, 2016 04:24 PM

    OAuth logoThe initial working group draft of the OAuth Token Binding specification has been published. It has the same content as draft-jones-oauth-token-binding-00, but with updated references. This specification defines how to perform token binding for OAuth access tokens and refresh tokens. Note that the access token mechanism is expected to change shortly to use the Referred Token Binding, per working group discussions at IETF 96 in Berlin.

    The specification is available at:

    An HTML-formatted version is also available at:

    September 07, 2016

    KatasoftTutorial: Build a Spring WebMVC App with Primefaces [Technorati links]

    September 07, 2016 03:45 PM

    Primefaces is a JavaServer Faces (JSF) component suite. It extends JSF’s capabilities with rich components, a skinning framework, a handy theme collection, built-in Ajax, mobile support, push support, and more. A basic input textbox from the JSF tag library becomes a fully-featured textbox with theming in Primefaces.

    Frontend frameworks like AngularJS provide UI components, Ajax capabilities, and HTML5 compliance much like Primefaces does. If you are looking for a lightweight application with quick turnaround time, AngularJS could be your best bet. However, when dealing with an enterprise Java architecture, it is often best to use a mature framework like Primefaces. It is stable and ever-evolving, with the help of an active developer community.

    Primefaces also makes a UI developer’s life easier by providing a set of ready-to-use components that would otherwise take a considerable amount of time to code – e.g., the dashboard component with drag-and-drop widgets. Other examples include sliders, autocomplete components, tab views for pages, charts, calendars, etc.
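    As a taste of how little markup such components need, a slider can be attached to an input with a couple of tags (this fragment is illustrative; settingsBean and ratio are made-up names):

    ```xhtml
    <h:form>
        <!-- a text input bound to a bean property (hypothetical bean/property names) -->
        <p:inputText id="ratio" value="#{settingsBean.ratio}"/>
        <!-- the Primefaces slider component, attached to the input by id -->
        <p:slider for="ratio"/>
    </h:form>
    ```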

    Spring WebMVC and Primefaces

    In Spring WebMVC, components are very loosely coupled, which makes it easy to integrate different libraries into the model layer or the view layer.

    In this tutorial, I am going to walk you through using Spring WebMVC and Primefaces to create a basic customer management application with a robust frontend. All the code can be found on GitHub.

    Create a Maven Project

    Create a new Maven project using your favorite IDE. After creating the project, you should see the pom.xml in the project folder. A minimal pom.xml should look like this:

    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    
    <groupId>com.stormpath.blog</groupId>
    <artifactId>SpringPrimefacesDemo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>war</packaging>
    
    <name>SpringPrimefacesDemo</name>
    <url>http://maven.apache.org</url>
    
    <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    
    </project>

    Add Spring Libraries

    Next, add the necessary Spring libraries to the dependencies section of the pom.xml.

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <jdk.version>1.7</jdk.version>
        <spring.version>4.3.2.RELEASE</spring.version>
    </properties>
    
    <dependencies>
        <dependency> 
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>${spring.version}</version>
         </dependency>
    </dependencies>

    Create Your Sample Project with Spring WebMVC

    For the customer management application we are going to build, we need to create a mock customer database. It will be a POJO with three attributes. The Customer class would look like this:

    package com.stormpath.blog.SpringPrimefacesDemo.model;
    
    public class Customer {
    
        private String firstName;
        private String lastName;
        private Integer customerId; 
    
        public String getFirstName() {
            return firstName;
        }
        public void setFirstName(String firstName) {
            this.firstName = firstName;
        }
        public String getLastName() {
            return lastName;
        }
        public void setLastName(String lastName) {
            this.lastName = lastName;
        }
        public Integer getCustomerId() {
            return customerId;
        }
        public void setCustomerId(Integer customerId) {
            this.customerId = customerId;
        }
    }
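    As a quick sanity check outside the web container, you can exercise the POJO from a plain main method. This throwaway class is not part of the tutorial’s project, just an illustration; it inlines a copy of Customer so it compiles standalone:

    ```java
    public class CustomerDemo {

        // Standalone copy of the tutorial's Customer POJO so this snippet compiles on its own
        static class Customer {
            private String firstName;
            private String lastName;
            private Integer customerId;

            public String getFirstName() { return firstName; }
            public void setFirstName(String firstName) { this.firstName = firstName; }
            public String getLastName() { return lastName; }
            public void setLastName(String lastName) { this.lastName = lastName; }
            public Integer getCustomerId() { return customerId; }
            public void setCustomerId(Integer customerId) { this.customerId = customerId; }
        }

        // Render a one-line summary of a customer
        static String describe(Customer c) {
            return c.getFirstName() + " " + c.getLastName() + " #" + c.getCustomerId();
        }

        public static void main(String[] args) {
            Customer customer = new Customer();
            customer.setFirstName("John");
            customer.setLastName("Doe");
            customer.setCustomerId(123456);
            System.out.println(describe(customer)); // prints: John Doe #123456
        }
    }
    ```
    
    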

    Then we need to create a bean class to manipulate the Customer class:

    package com.stormpath.blog.SpringPrimefacesDemo.presentation;
    
    import java.util.ArrayList;
    import java.util.List;
    
    import javax.annotation.PostConstruct;
    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.ViewScoped;
    
    import com.stormpath.blog.SpringPrimefacesDemo.model.Customer;
    
    @ManagedBean
    @ViewScoped
    public class CustomerBean {
        private List<Customer> customers;
    
        public List<Customer> getCustomers() {
            return customers;
        }
    
        @PostConstruct
        public void setup()  {
            List<Customer> customers = new ArrayList<Customer>();
    
            Customer customer1 = new Customer();
            customer1.setFirstName("John");
            customer1.setLastName("Doe");
            customer1.setCustomerId(123456);
    
            customers.add(customer1);
    
            Customer customer2 = new Customer();
            customer2.setFirstName("Adam");
            customer2.setLastName("Scott");
            customer2.setCustomerId(98765);
    
            customers.add(customer2);
    
            Customer customer3 = new Customer();
            customer3.setFirstName("Jane");
            customer3.setLastName("Doe");
            customer3.setCustomerId(65432);
    
            customers.add(customer3);
            this.customers = customers;
        }
    }

    Create the Frontend with Primefaces

    Since we are going to add Primefaces components to our UI, we will need a UI with JSF capabilities. Add the JSF dependencies to your pom.xml:

    <properties>
            ...
            <servlet.version>3.1.0</servlet.version>
            <jsf.version>2.2.8</jsf.version>
            ...
    </properties>
    …
    
    <dependency>
        <groupId>javax.servlet</groupId>
        <artifactId>javax.servlet-api</artifactId>
        <version>${servlet.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.sun.faces</groupId>
        <artifactId>jsf-api</artifactId>
        <version>${jsf.version}</version>           
    </dependency>
    <dependency>
        <groupId>com.sun.faces</groupId>
        <artifactId>jsf-impl</artifactId>
        <version>${jsf.version}</version>            
    </dependency>

    Note: If your target server is a Java EE-compliant server like JBoss, the JSF libraries will be provided by the server. In that case, the Maven dependencies can conflict with the server libraries. You can add the provided scope to the JSF libraries in the pom.xml to solve this.

    <dependency>
        <groupId>com.sun.faces</groupId>
        <artifactId>jsf-api</artifactId>
        <version>${jsf.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.sun.faces</groupId>
        <artifactId>jsf-impl</artifactId>
        <version>${jsf.version}</version>
        <scope>provided</scope>
    </dependency>

    Create a web deployment descriptor – web.xml. The folder structure needs to be as shown below (the other files referenced will be created below):

    webapp/
    ├── META-INF
    │   └── MANIFEST.MF
    ├── WEB-INF
    │   ├── faces-config.xml
    │   └── web.xml
    └── index.xhtml

    web.xml content:

    <?xml version="1.0" encoding="UTF-8"?>
    
    <web-app xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
             version="3.0">
    
        <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>
        <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>*.xhtml</url-pattern>
        </servlet-mapping>
        <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>/faces/*</url-pattern>
        </servlet-mapping>
        <welcome-file-list>
            <welcome-file>faces/index.xhtml</welcome-file>
        </welcome-file-list>
    </web-app>

    Create faces-config.xml in the WEB-INF folder:

    <?xml version="1.0" encoding="UTF-8"?>
    <faces-config
        xmlns="http://xmlns.jcp.org/xml/ns/javaee"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-facesconfig_2_2.xsd"
        version="2.2">
    
    </faces-config>

    Add index.xhtml to the webapp folder.

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:ui="http://java.sun.com/jsf/facelets">
    
        <h:head></h:head>
        <body>
            <h1>Spring MVC Web with Primefaces</h1>
        </body>
    </html>

    Note the XML namespaces for JSF included in the XHTML. Now we can add the proper dependencies to pom.xml.

    …
    <properties>
        ...
        <primefaces.version>6.0</primefaces.version>
    </properties>
    
    ...
    <dependency>
        <groupId>org.primefaces</groupId>
        <artifactId>primefaces</artifactId>
        <version>${primefaces.version}</version>
    </dependency>

    Finally, add a class implementing the WebApplicationInitializer interface. This will be a bootstrap class for Servlet 3.0+ environments, starting the servlet context programmatically instead of (or in conjunction with) the web.xml approach.

    package com.stormpath.blog.SpringPrimefacesDemo;
    
    import javax.servlet.ServletContext;
    import javax.servlet.ServletException;
    import org.springframework.context.annotation.ComponentScan;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.WebApplicationInitializer;
    import org.springframework.web.context.ContextLoaderListener;
    import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
    import org.springframework.web.servlet.config.annotation.EnableWebMvc;
    
    @EnableWebMvc
    @Configuration
    @ComponentScan
    public class WebAppInitializer implements WebApplicationInitializer {
    
        @Override
        public void onStartup(ServletContext sc) throws ServletException {
            AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
            // Register this @Configuration class so the context has a configuration to load
            context.register(WebAppInitializer.class);
            sc.addListener(new ContextLoaderListener(context));
        }
    }

    Configure Primefaces

    Now we will modify the index.xhtml file and create a data table to display the customer data. The XML namespaces need to be modified to add the Primefaces reference.

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:ui="http://java.sun.com/jsf/facelets"
    xmlns:p="http://primefaces.org/ui">
    
        <h:head></h:head>
        <body>
            <h1>Spring MVC Web with Primefaces</h1>
            <p:dataTable var="customer" value="#{customerBean.customers}" widgetVar="customerTable" emptyMessage="No customers found">
                 <p:column headerText="Id">
                     <h:outputText value="#{customer.customerId}"/>
                 </p:column>
                <p:column headerText="First Name">
                    <h:outputText value="#{customer.firstName}"/>
                </p:column>
                <p:column headerText="Last Name">
                    <h:outputText value="#{customer.lastName}"/>
                </p:column>
            </p:dataTable>
        </body>
    </html>

    Deploy to Your Application Server (and Test)

    Build the project, deploy the WAR to the application server, and verify that the customer table renders.

    Extended Capabilities

    Modify the code as shown below to easily produce a sortable data table with filters. Add the following line to CustomerBean.java:

    private List<Customer> filteredCustomers;

    …and:

    public List<Customer> getFilteredCustomers() {
        return filteredCustomers;
    }
    
    public void setFilteredCustomers(List<Customer> filteredCustomers) {
        this.filteredCustomers = filteredCustomers;
     }

    Modify index.xhtml to:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:ui="http://java.sun.com/jsf/facelets"
    xmlns:p="http://primefaces.org/ui">
    
        <h:head></h:head>
        <body>
            <h1>Spring MVC Web with Primefaces</h1>
            <h:form>
                <p:dataTable var="customer" value="#{customerBean.customers}" widgetVar="customerTable" emptyMessage="No customers found" filteredValue="#{customerBean.filteredCustomers}">
                    <p:column headerText="Id" sortBy="#{customer.customerId}" filterBy="#{customer.customerId}">
                        <h:outputText value="#{customer.customerId}"/>
                    </p:column>
                    <p:column headerText="First Name" sortBy="#{customer.firstName}" filterBy="#{customer.firstName}">
                        <h:outputText value="#{customer.firstName}"/>
                    </p:column>
                    <p:column headerText="Last Name" sortBy="#{customer.lastName}" filterBy="#{customer.lastName}">
                        <h:outputText value="#{customer.lastName}"/>
                    </p:column>
              </p:dataTable>
        </h:form>
    </body>
    </html>

    More on Primefaces

    All the UI components available with Primefaces are showcased at the Primefaces Showcase.

    Apart from the components extended from the JSF Tag library, Primefaces has a lot of versatile components and plugins known as Primefaces extensions. They are meant to make developers’ lives easier and our web pages more beautiful.

    And now, it’s time to add authentication to your Primefaces webapp! Learn more, and take Stormpath for a test drive, with these resources:

  • A Simple WebApp with Spring Boot, Spring Security, & Stormpath — In 15 Minutes!
  • 5 Practical Tips for Building Your Spring Boot API
  • OAuth 2.0 Token Management with Spring Boot and Stormpath
    The post Tutorial: Build a Spring WebMVC App with Primefaces appeared first on Stormpath User Identity API.

    MythicsDramatically Increasing a Competitive Advantage with the Oracle Database Appliance [Technorati links]

    September 07, 2016 12:55 PM

    See a fantastic new client spotlight by Oracle and Mythics highlighting our customer Clinispace and their use of the Oracle Database Appliance (ODA) with expert…

    Matthew Gertner - AllPeersThe Beginner’s Guide to Starting a Blog [Technorati links]

    September 07, 2016 12:40 AM

    In case you didn’t know, blogs (a term that is a contraction of weblogs) first became popular during the 1990s when people started to write online articles about their favorite topics, such as traveling, food, sports, health, business, fashion and lifestyle choices. Since then blogging has gone from strength to strength. In fact, pick any subject at all and you’ll most likely find somebody, somewhere is blogging about it. If you like the idea of communicating information, knowledge, inspiration or skills, or even if you just want to entertain folk and make new friends, here are a few tips to get you started.


    Your domain name

    If you already know what you want to write about then you need to think about bagging some personal space on the internet and for this you’ll need a domain name. Some people use their own name and others go with a preferred theme if that’s appropriate, opting to use this in the title. Examples might include parenting, cooking, hobbies or pets for instance. If you want to create a community that shares interests or experiences, you might opt for using a title that reflects this – our own website focuses on the meeting of minds, for example.

    Remember that you may find your first choice is not available: after all, if you have opted for a really good title, someone else may have taken it. For a personal website you can always add a middle initial or extra dashes if your first choice is already in use. To register your new domain you can opt for a free site, such as Blogger, WordPress or Tumblr. You’ll get really helpful tips on these sites about how to lay out your text, place images and embed videos to create the effect you want. Take the time to visit a few sites you really like and make a mental note of how they organize their look.

    Your posts

    Once your blog is set up you can start writing and posting your articles to it. As with any word processor software you will have options to choose the titles of your posts, the fonts, justification and colors. Your blog menu will also contain choices for uploading and editing images and with a little practice you’ll be able to assemble posts that look as good as they read.

    If you are going to write about a specialist subject, such as health and fitness for example, it’s best to take some time to do proper research. After all, you don’t want to find you are giving people incorrect information, or merely rehashing the contents of someone else’s blog.

    In fact, the quality of your blog content – what and how you actually write, plus how you present it – makes a tremendous difference to the amount of attention it will get. In terms of presentation you need to use great images and short but sharp videos to make your content interesting and catch the eye of viewers. Use a reputable company such as Dreamstime as a resource for top quality stock images and video footage. If you get it right, a picture really does paint a thousand words.

    Promoting your blog

    This is an area where you can’t afford to be shy. If you don’t promote your blog, no one will read it and you may feel you might as well not have bothered in the first place. Instead, use all the free channels at your fingertips and make the most of your social media connections. Most programs allow you to link to social media outlets such as Twitter and Facebook as soon as you publish a post. There’s also no harm in sending out a gentle reminder to friends and followers in case some people missed your announcement first time round.

    Don’t be afraid to comment in a useful way on other blogs that are similar or on the same topic. That’s how you gather new followers. If you comment often you can also link to another blog, creating a ‘trackback’ link to your own blog.

    Some bloggers like to offer guest posts to others. This means you could write something for another blogger’s site and promote your own site at the same time. In return, you can also offer this facility to other bloggers, especially if they are writing about similar themes or topics. It’s best to make sure your ideas are more or less in tune with theirs, of course.

    Online forums are also a good way to spread the word about your own writing. You will most probably find that if you make useful or insightful comments on a regular basis, then people will want to know more about who you are and what you write.

    Regular posting

    This is one of the hurdles that some would-be bloggers never manage to overcome. As well as aiming for high quality writing and top class images you also need to post regularly. Every couple of days is best but at least once a week is essential, as otherwise people will simply forget that you and your blog are there. Frequency of posts is also a way to ensure that people will make repeat visits to your blog, in case they’ve missed anything.

    Earning money from your blog

    Finally, as a beginner blogger you may be beguiled by the idea that you can actually make money from your blog. This is certainly true providing you have achieved a sizeable following – advertisers are willing to pay you for the opportunity to place ads on your site. The level of visibility of these ads has to be carefully judged, however, because you won’t want to put off site visitors if the ads are too dominant.

    Also, your chosen topics or themes will make a difference because advertisers will use these to judge the demographic your blog is reaching. For example, business or finance topics are likely to appeal to those with a healthy income, whereas a blog about boy bands is likely to attract young teenagers who are less interesting to advertisers in this respect.

    The post The Beginner’s Guide to Starting a Blog appeared first on All Peers.

    September 06, 2016

    Matthew Gertner - AllPeersKaren Phillips About The Work Done At Phillips Charitable Organization [Technorati links]

    September 06, 2016 07:34 PM

    There are many different charitable organizations around the world, and the Phillips Charitable Organization is one of them. It is a genuinely interesting foundation that has managed to help many people over the past few years. Karen, the wife of Charles Phillips, Infor’s CEO, talked to us a little about the organization and the work it does. Here is what we found out.

    The Phillips Organization (full name: Karen and Charles Phillips Charitable Organization) is a 501(c) non-profit created to offer financial aid to disadvantaged students and single parents interested in the world of engineering. Because Charles Phillips served in the Marine Corps, the organization also offers help to wounded veterans. The PCO board includes Karen and Charles Phillips and two of their closest friends, Young Huh and Eric Garvin.


    Most of the programs developed by the organization are based on grants, offered to students who are in dire need of financial aid and who show real potential. In the past two years alone the organization has awarded over one hundred grants. They were highly successful, helping students achieve great results; many of the recipients are considered the wave of the future in US engineering.

    The work done through this foundation is not new to the PCO board members. All four are friends who have donated substantial sums to traditional charities over the years. Donating took little effort, but the four felt they could get more involved. Large organizations are unfortunately weighed down by bureaucracy, and Karen and Charles Phillips did not want to deal with that.

    What is interesting about the charity is that it was created in much the same way Charles does business: the focus was put on the working environment and on efficiency while minimizing running costs. This means the finances needed to run the charity are much lower than at comparable organizations. Naturally, Infor-based cloud apps are used to improve the charity’s effectiveness.

    The charity the friends created has no administrative overhead. Decisions can be made quickly, and competing interests are easy to analyze. To put it as simply as possible, every board member can be involved at a personal level. It is much easier to award grants to those in dire need when the number of people responsible for the choice is small.

    The work of the organization will continue in the future, and it will definitely be something to keep an eye on.

    The post Karen Phillips About The Work Done At Phillips Charitable Organization appeared first on All Peers.

    Mike Jones - MicrosoftSecond public draft of W3C Web Authentication Specification [Technorati links]

    September 06, 2016 04:54 PM

    W3C logoThe W3C Web Authentication working group has announced publication of the second public draft of the W3C Web Authentication specification. The working group expects to be issuing more frequent working drafts as we approach a Candidate Recommendation.

    CA on Security ManagementHow security enables digital transformation [Technorati links]

    September 06, 2016 02:00 PM
    Apparently, many enterprises still view security and innovation as opposing forces that need to be chosen between or, at best, balanced. Reading a recent CIO article titled,…

    The post How security enables digital transformation appeared first on Highlight.


    Paul TrevithickAdaptive, responsive and mobile friendly sites [Technorati links]

    September 06, 2016 10:56 AM


    Are adaptive, responsive and mobile friendly all the same? The answer is not quite. Let’s take a look at the differences between them.

    Difference between responsive and friendly sites

    Many people assume that a mobile friendly site is specifically designed for a mobile device. In fact, it is simply a website interface that works with all kinds of devices.

    So what is the difference between a responsive site and a friendly one? A responsive mobile site will alter its layout based on the device it is viewed on, reflowing into a single-column design that fits the device screen. A mobile friendly site, by contrast, looks the same as the standard desktop site, only shown at a smaller scale.

    In simple words, a responsive site is always mobile-friendly: it shares the features of a mobile friendly site. The main difference is that a responsive website makes better use of spacing and navigation and will always adapt to the device it is displayed on.

    Difference between adaptive and responsive

    Adaptive and responsive sites are similar in principle but different in practice. The similarity is that both change their dimensions based on the device they are viewed on. The main difference is that a responsive site will adjust fluidly to any layout, while an adaptive site only adapts at selected breakpoints.

    Which type of site should you use?

    All in all, your choice will depend on what kind of site you have and where you get most of your traffic. If your traffic largely comes from mobile devices, you may want to opt for an adaptive or responsive site. If your mobile traffic is low, however, a simple mobile-friendly site may be all you need; an adaptive or responsive site isn’t always necessary in this case.

    The post Adaptive, responsive and mobile friendly sites appeared first on Incontexblog.org.

    August 31, 2016

    Matthew Gertner - AllPeersMissing The Sun, Sea and Sand? Simple Tricks To Get that ‘Beach Look’ For Your Home [Technorati links]

    August 31, 2016 09:22 PM

    Don’t you wish you could spend all day, every day on the beach? Well, you can – sort of!

    If you love nothing more than feeling the sun on your back and sand in your toes as the turquoise water laps at the shore and the palm trees blow softly in the summer breeze, why not re-create it in your home?

    The beach is a happy place for many – a place where the stresses and worries of everyday life are forgotten and relaxation ensues – a place where loved ones can have fun and spend quality time together. The way you feel on the beach is exactly how you want to feel at home, isn’t it?

    So, whether you live near the beach or just dream about the ocean, here is how to get the look for your home.

    Colour Scheme:

    White is a go-to colour for beach houses – it gives a clean, calm and luxurious feel, but pastels also work really well too – particularly blue, greens, corals and yellows.

    Pops of turquoise are always a good idea throughout the house to bring the tranquility of the calm ocean waters inside.

    Wood detail and rustic accents are perfect to complete the look, as are beach-themed stencils and decals.

    These stencils could be things like words that really spell out your love for the seaside, such as ‘life is better at the beach’, or images such as shells, anchors, starfish, beach huts, palm trees and so on. These can be used in any way you like, perhaps a large image that becomes the focus of the room, or smaller ones that are incorporated subtly across the space.

    Furniture:

    When you are choosing furniture to fit in with your beach theme, opt for wicker as well as reclaimed and rustic wood.

    You could, of course, buy the base of your furniture and then customize it to your preferred beach look.

    For example, you could take a look at www.divancentre.co.uk to get the base for your bed and then you could create a headboard, perhaps in the shape and colour of waves, from reclaimed wood.

    The little things make a big difference:

    If you don’t have a view of the beach, seaside prints are the perfect alternative. You could even blow up your favourite photo from your own beach trip and perhaps have it put onto canvas.

    Fairy lights are a great addition to beach-themed rooms, as their sparkles will emulate the stars at night.

    Finish the look with nautical ornaments from boats to starfish. You could even swap everyday items – shells in place of door and cupboard handles, perhaps? This means the theme continues in the most unlikely of places.

    DIY:

    When you are on the beach, why not gather together some sand, shells, driftwood and stones and create your own, unique beach-inspired ornaments?

    You could put your smaller stones and/or sand into a glass jar – the perfect spot for a candle.

    Shells are great for decorating a range of household items, from mirrors to vases.

    Likewise, rope is always good to give a beachy effect, so use it for hanging things or, like the shells, it could line mirrors or wall art.

    The post Missing The Sun, Sea and Sand? Simple Tricks To Get that ‘Beach Look’ For Your Home appeared first on All Peers.

    Matthew Gertner - AllPeersSmart Renovations: Get A Better Price For Your Home With These Quick Fixes [Technorati links]

    August 31, 2016 06:22 PM

    The need for renovations arises every few years, but renovating need not be just about maintenance. There are lots of ways to add value to your house by renovating smartly.

    In a recent interview, property renovation expert Cherie Barber shared her views on how homeowners can add value by renovating. “Focus on what’s visible,” says Cherie, “concentrate on the areas buyers love.”


    By focusing on certain crucial parts of the house, you can generate a substantial return on your renovation investments. So, if you plan on getting the house redone in the near future, here are some quick fixes you should probably focus on:

    Focus on the Look and Feel

    Cherie recommends homeowners focus on the look and feel of the property to greatly boost its value. A fresh coat of paint or an updated look for the front entrance is likely to create an inviting atmosphere. A property that sets a good first impression is likely to fetch a much better price from a buyer. Buying a home is, after all, an emotional experience. Set the right mood and you’ll do wonders for the property’s value.

    Bathrooms

    Bathrooms are one of two essential parts of the house that can make or break the selling price (we’ll get to the other one in just a bit).

    Bathrooms need to be sparkling clean and updated with the best fixtures. Homeowners need to aim for luxurious and modern looking bathrooms. Focus on providing ample storage and lots of space. Small amenities like his and her vanities go a long way too.

    Kitchens

    Kitchens are arguably more important than bathrooms when it comes to selling a property. “It’s the engine of the whole house,” believes Cherie Barber. The quickest way to add value to your kitchen is to add in an island. An island bench can add space and create a hub for the entire family, which is really attractive to a homebuyer.

    Space

    The best way to get the most bang for your buck while renovating is to try and add space to the property. The cheapest way to create more space is to minimize the furniture and change the layout. However, if your budget allows for an added bedroom, you can boost your property’s value by $30,000 to $150,000 depending on where you live. Space is the single most sought after feature of a property and adding more space is never a bad investment.

    Essentials

    A fresh coat of paint and new light fittings are all essentials when you’re trying to sell a property. These renovations don’t cost a lot and are very likely to be noticed by homebuyers, which is what makes them so crucial. Go for energy efficient LED lights wherever possible (eco-friendly homes get a better price) and a lively color scheme throughout the house for best effects.

    Renovating is almost a necessity when you own property, but with a well thought out plan you can make the most of your investment and add value to your home. Take a smarter approach to renovations and you’ll fetch a better price for your property when it’s time to sell.

    The post Smart Renovations: Get A Better Price For Your Home With These Quick Fixes appeared first on All Peers.

    August 30, 2016

    Matthew Gertner - AllPeersNatural Ways To Replenish The Energy You Lost [Technorati links]

    August 30, 2016 05:02 PM

    Daily energy is really important for all of us. It is vital that your daily energy levels are as high as they need to be to meet the demands of your body, your family life and your work. Jason Camper highlights that replenishing lost energy is not at all simple; it is a process that takes much longer than many people think. You will need to make sure you always do what it takes. Thankfully, there are plenty of natural ways to replenish lost energy, and they are discussed below.

    Take A Fast Walk

    One of the easiest ways to replenish your energy is to take a short walk. If you walk at your own pace for just a quarter of an hour, you will end up with enough energy to last you an hour and a half. Many find this counterintuitive, since you spend energy as you walk. However, once you try it you will realize that it helps far more than you initially thought.


    Meditation

    Just sit back, let your muscles rest, relax and make sure the cells inside your body are filled with that all-important oxygen. When you are tense, your cells are starved of it, which means energy is not produced as it should be. As you meditate and breathe deeply, your body starts working properly again and generates much more energy.

    Start Writing What Bothers You

    This is quite an interesting trick to take into account. Stress and tension are normally the reasons the mind ends up wandering and you worry. Take a piece of paper and write down everything that bothers you or creates stress. When you do this you instantly feel better, and you will notice your energy levels go up. This is essentially how asthma patients enhance lung function and how rheumatoid arthritis patients manage to deal with pain: writing down what bothers you releases stress-induced tension.

    Pay Close Attention To Hydration

    This is something few people appreciate, although everyone will tell you they know how important it is to stay hydrated. When you are dehydrated, your body becomes more fatigued. All it takes to shake off the fatigue and get an almost instant energy boost is to drink water. Always drink as much water as your body asks for; do not get hung up on the seven-glasses-per-day rule or anything similar. Whenever you feel thirsty, drink, and your energy will be replenished. The great thing about this trick is that you can use it several times a day, whenever you feel a little thirsty.

    The post Natural Ways To Replenish The Energy You Lost appeared first on All Peers.

    Matthew Gertner - AllPeersTop Podcasts and Online Radio Shows on Wealth Management [Technorati links]

    August 30, 2016 03:34 PM

    Podcasts and digital radio have come a long way, with over a billion downloads and subscribers on Apple podcasts alone.

    Podcasts and digital radio are an excellent way to learn more about specific topics such as wealth management. They’re convenient to listen to while driving and easily accessible from your iPod, smartphone, and even your computer.


    Here is a list of some of the best shows available for streaming right now.

    BiggerPockets Podcast

    Rated as the number one real estate podcast on iTunes, the BiggerPockets Podcast is hosted by Josh Dorkin and Brandon Turner, who deliver weekly interviews and tips for listeners looking to grow their real estate business.

    The show is popular because it is full of real, practical advice. If you are thinking about starting a real estate career or want to brush up on the goings-on in the real estate industry, this could be the podcast for you.

    Smart Money with Keith Springer

    Invest for need, not greed is the motto of this twice-weekly broadcast by financial advisor Keith Springer. Recent podcasts include 5 Secret Do’s and Don’ts that Drive Successful Investors, How the 2016 Tax Code Changes Will Affect You!, The Top 10 Secrets Retirees Don’t Tell You and an interesting podcast with Two Superstar Billion Dollar Money Managers. You can find the podcast on iTunes or listen live on Saturday at 1 pm and Sunday at 6 am each week.

    The Clark Howard Show

    A longstanding name in the world of personal finance, Clark Howard is an expert on financial matters and the host of a podcast and radio show. The syndicated “Clark Howard Show” covers how to save money, spend less and avoid the many consumer rip-offs. You can listen live and even call in to Howard, who is on the air every day Monday to Friday, or you can listen to his podcasts at your convenience. This is a great show for getting straightforward advice on saving money and preparing for the future.

    Freakonomics Radio

    The Freakonomics Radio Show is an extension of the popular “Freakonomics” and “SuperFreakonomics” books co-authored by journalist Stephen Dubner and economist Steven Levitt. An award-winning weekly podcast (with millions of downloads a month), Freakonomics Radio airs on public-radio stations across the country. On the show, Dubner uncovers “the hidden side of everything”, covering topics ranging from racially profiling employees to how to win games and beat people. The podcast covers how to think creatively, rationally and productively, particularly about finances and other resources.

    The post Top Podcasts and Online Radio Shows on Wealth Management appeared first on All Peers.

    Matthew Gertner - AllPeersTips for How to Become an Engineer [Technorati links]

    August 30, 2016 02:18 PM

    The world of engineering can provide you with a fantastic career filled with innovation, design, job security and, for the most part, an excellent salary. The basic requirements to be an engineer are a creative mind and a strong understanding of maths and science. You should also have a passion for it: as with anything, if you are not passionate about what you are doing then you are unlikely to be successful, and there is really no point in doing it at all. If you meet the criteria and are considering engineering as a viable career path, here are some tips on how to get into the industry.

    Learn From Those Who Have Done It

    On your journey to becoming an engineer it is important to let yourself be influenced by those in the industry. Successful people like Anura Leslie Perera, for example, can provide great inspiration: a man who has worked in many fields of engineering, such as construction and shipbuilding, and who now owns a very successful aerospace engineering firm. Looking at how people like Anura have gone about their careers can give you a great model to follow.

    Education Requirements

    When it comes to education, it is important to work hard at gaining strong results in maths and science; these are the cornerstones of engineering regardless of which sector you plan to go into. If you are looking at computer engineering, then naturally IT should also be studied at high school level. As for colleges, unlike in many fields of work, there isn’t as much emphasis on which college you attend when it comes to engineering jobs. Attending a college like MIT will increase your opportunities in the job market and help you command a higher salary, but it is not a prerequisite.

    Helping Yourself

    As with many careers, it really pays to put in your own work away from the classroom; when it comes to engineering you should be a 24-hour student. Side projects centered on your chosen field of engineering will keep your mind focused and improve your ability to see projects through from beginning to end. You should also be making friends and contacts within the industry; there is no harm in emailing a group of professionals to ask for help and advice. If you start building a network early on, it can pay great dividends in the future.

    Widen Your Abilities

    When it comes to really succeeding in the engineering industry, it takes more than just being a great engineer; it is also important to have a wide variety of skills. These can include business acumen, leadership ability, interpersonal skills or knowledge of a wide variety of sectors. If you want to stand out when it comes to getting a job, it is vital to have plenty of strings to your bow.

    The post Tips for How to Become an Engineer appeared first on All Peers.

    KatasoftDesigning the Stormpath SDK for Asynchrony in .NET [Technorati links]

    August 30, 2016 11:19 AM

    We designed the Stormpath .NET SDK with asynchrony in mind. Since the goal of the SDK is to make network calls to the Stormpath API, it’s a great fit for the Task-based asynchrony pattern introduced in .NET 4.5. Every network method returns a Task<T>, which can be awaited to get the result.

    Native support for Tasks in ASP.NET and ASP.NET Core means that your application can intelligently pause threads that are waiting on asynchronous operations, which increases performance.

    Embracing the Task pattern in the SDK has the side benefit of making it clear when and where network access will occur in your code: if a method doesn’t return a Task, it won’t make a network call. Nice and readable, just like it should be.

    However, using Tasks in a library leads to a problem: what happens when your consuming code can’t use await?

    The problem of blocking

    It’s possible to synchronously block on an asynchronous Task by calling task.Result or task.Wait(). However, it’s a really bad idea. In a web application, it can lead to deadlocks. Don’t do it!

    The await keyword provides a way to wait for a Task without actually blocking, using compiler-generated magic continuations. This requires you to mark the method body as async. In most cases, this is the perfect solution. The only problem areas are:

  • Existing applications that can’t use async without significant refactoring of existing code.
  • Methods that cannot be marked as async, like void Main() or OWIN Startup methods.

    These edge cases don’t happen often. However, when they do, the library experience is poor: the developer is forced to use a bad pattern (blocking) without any other option.
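    To make the danger concrete, here is a minimal standalone sketch (not Stormpath-specific) of why blocking with .Result is safe in a console app but deadlock-prone under a SynchronizationContext, such as the one in classic ASP.NET:

    ```csharp
    using System;
    using System.Threading.Tasks;

    class BlockingDemo
    {
        static async Task<int> FetchAsync()
        {
            await Task.Delay(50); // stands in for a network call
            return 42;
        }

        static void Main()
        {
            // A console app has no SynchronizationContext, so .Result merely
            // blocks until the task completes. Under classic ASP.NET, the
            // continuation after Task.Delay would need the request thread --
            // the very thread stuck inside .Result -- and the app deadlocks.
            int value = FetchAsync().Result;
            Console.WriteLine(value); // prints 42
        }
    }
    ```

    The same line of code behaves very differently depending on context, which is exactly why blocking is such a trap for library consumers.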

    To provide a solution for situations where the SDK needs to operate synchronously, we decided to implement every relevant SDK method twice — once as an asynchronous method, and once as a (natively) synchronous method. We were inspired by StackExchange’s Dapper library, where they call it “dual-stack design”. Entity Framework 6 and later also uses this pattern.

    For example, on the IApplication interface, there are two methods that represent the same operation:

    public interface IApplication
    {
      Task<IAccount> CreateAccountAsync(...);
      IAccount CreateAccount(...);
    }

    Providing two versions of each method solves one problem, but introduces another: now the interfaces are bloated with similar-looking methods, which could be confusing for a newcomer. (Should I use CreateAccount or CreateAccountAsync? Why are there two?)

    I’m a big believer that SDKs should guide developers toward best practices whenever possible. Using the asynchronous method is a best practice, but a synchronous method sitting on the interface is so tempting! What if the synchronous methods were only visible when you needed them?

    Hiding methods behind a namespace

    To create an “opt-in” experience for the Stormpath SDK’s synchronous methods, we used C# extension methods to implement a mixin pattern and hide the methods behind the Stormpath.SDK.Sync namespace.

    Now, instead of both methods living on the interface as shown above, the synchronous method lives in an extension class:

    namespace Stormpath.SDK.Application
    {
      public interface IApplication
      {
        Task<IAccount> CreateAccountAsync(...);
      }
    }
    
    namespace Stormpath.SDK.Sync
    {
      // Extension methods must be defined in a static class
      public static class ApplicationSyncExtensions
      {
        public static IAccount CreateAccount(this IApplication application, ...)
        {
          // (sync implementation)
        }
      }
    }

    Now the synchronous “overloads” are only available if the developer imports the Stormpath.SDK.Sync namespace at the top of their code file. Otherwise, they aren’t visible.
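    In calling code, the opt-in looks something like this (a sketch: `application` and `newAccount` are hypothetical variables, and the method arguments are elided as in the interface above):

```csharp
using Stormpath.SDK.Application;
using Stormpath.SDK.Sync; // opt in: the synchronous extension methods become visible

// With both namespaces imported, the two "overloads" sit side by side:
IAccount viaAsync = await application.CreateAccountAsync(newAccount);
IAccount viaSync = application.CreateAccount(newAccount);

// Remove the Stormpath.SDK.Sync using directive, and the second call
// no longer compiles -- only the async method is visible.
```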

    Why our solution?

    I like the solution that we used in the Stormpath .NET SDK because it:

  • Suggests async best practices by default
  • Supports the edge cases where asynchrony isn’t available
  • Exposes additional behavior in an intuitive way

    If you have any thoughts or critiques, share them with me on Twitter or below in the comments! And, if you’re interested in learning more about the Stormpath .NET SDK, you can check out these resources:

  • The .NET SDK Documentation
  • Simple Social Login in ASP.NET Core
  • 10 Minutes to User Authentication in ASP.NET

    The post Designing the Stormpath SDK for Asynchrony in .NET appeared first on Stormpath User Identity API.

    IS4UMIM2016 Troubleshooting: MIM Portal Performance Issue [Technorati links]

    August 30, 2016 09:13 AM

    Issue

    After experiencing a decrease in MIM portal responsiveness after installation, I checked the server resources and saw the following memory consumption in Task Manager:

    Solution

    The solution to this problem is quite simple. Since MIM does not use any of the search capabilities of the underlying SharePoint engine, we can simply remove the search component. You can do this either via Central Administration or via PowerShell:

    # Remove the SharePoint search service application and its data
    $spapp = Get-SPServiceApplication -Name "Search Service Application"
    Remove-SPServiceApplication $spapp -RemoveData

    Related resources

    August 29, 2016

    Matthew Gertner - AllPeersWhat You Should be Doing if You Plan to Invest [Technorati links]

    August 29, 2016 11:17 PM

    Investing has long been a great way to make your money work for you. If you are sitting on some savings, you should consider what investment opportunities can do for you and your wealth. It may seem a little scary at first, placing your money into what is essentially a gamble, but once you get started you will see that it can be very rewarding. Regardless of whether you have strong knowledge of the markets or not, you can still get started with various forms of investment and look to increase your savings. If investment is something that you’re looking to get involved in, here are the things that you should be considering.

    stock-investing

    Personal or Private

    One of the first decisions that you need to make is whether you will control all of your investments or whether you will hire a private company or fund to make your investments for you. Unless you know your market of choice very well, I would recommend that you use professionals. My investment manager, Javier Garcia Teruel Avila, is incredibly experienced in the finance industry; he has spent much of his career in private equity investment and gained an MBA from Harvard University. He has helped me gain some strong returns over the years and, more importantly, I have faith that he will invest my money wisely. If you are going to go private, you need to ensure that you have faith and trust in the company that you use.

    What Do You Want to Gain?

    It is important to decide what you are looking to make out of your investments: not necessarily a financial figure, but rather a time frame and the percentage yield that you want from your money. You could opt to invest low-risk and look to gain regular dividends over a long period of time; alternatively, you could take a riskier strategy that makes you faster money by buying and selling. Once you know what you are looking to get out of your investments, it will be easier to form a strategy for how you will approach the market.

    Studying

    Even if you plan to use a professional to make your investments for you, it is imperative that you not only have a strong understanding of how the market works, but also stay well informed of its daily movements and potential impacts. Not having sufficient market knowledge means that you will be blindly investing, a certain way to lose money, and if you are working with professionals you will leave yourself open to the possibility of people taking advantage of you. Make sure that you study the market that you want to go into and make the effort to check daily what is happening with your investments.

     Caution

    Finally, it is important, during your first year at least, that you approach investment with caution: keep your level of investment low and make sure that you can afford to lose a percentage of however much you decide to invest. Once you have found your feet in the market you can take on a different strategy, but my advice would be to play it safe when you first start out.

    The post What You Should be Doing if You Plan to Invest appeared first on All Peers.

    Matthew Gertner - AllPeersWhere Will Your Next Great Vacation Be? [Technorati links]

    August 29, 2016 09:16 PM
    Where Will Your Next Great Vacation Be?
    Photo by CC user toasty on Flickr

    Deciding where to go on a vacation can sometimes leave you feeling like you need a vacation just planning one.

    Whether you are sticking close to home or heading to a different part of the world you’ve never seen before, the choices can certainly be tantalizing.

    That said, taking the time to plan the best vacation possible is something that you should never take for granted.

    As you plan your vacation of choice, know that there are companies out there waiting to help you get it right the first time around.

    So, where will your next great vacation be?

    Put the Plans in Motion Today

    In order to nail the best trip possible, here are a few pointers to not miss out on:

     

    As you make plans for your next fun-filled vacation, remember to lean on the pros for help.

    Along with their experience in helping travelers around the globe find experiences of a lifetime, you too can do your part, turning to the worldwide web for assistance in mapping out your trip.

    Whether your trip is soon or later down the road, take the time to properly plan it out, leaving less chance of a dream vacation turning into a nightmare.

    With all the money that goes along with a vacation, you want to make it a great one.

    The post Where Will Your Next Great Vacation Be? appeared first on All Peers.

    Matthew Gertner - AllPeersWhy Do So Many Business Leaders Fail? [Technorati links]

    August 29, 2016 09:10 PM

    It is interesting to notice that most of the successful business leaders of today were faced with some sort of failure in the past; that has become quite common in business. As an example, Charles Phillips, previously president of Oracle, is now Infor’s CEO. When he led Oracle he made many great moves, but he did have problems, mainly caused by others. He learned from that even if it was not a failure; he treated it as such, and when he became CEO of Infor many changes were made. This led to the huge growth that Infor sees right now, quickly growing from 4 employees to number 3 in the ERP service provider market.

    Charles-Phillips-CEO-Infor

    The success of a company is closely connected with the leadership skills of its managers and owners. However, many business leaders fail. Let’s look at the most common reasons why this tends to happen.

    Not Listening To The Complaints Of The Employees

    The people that fuel the growth of a company are not the business leaders. They are basically the facilitators, those that help the growth happen. The backbone is always the workforce. It is really important that a business leader listens to the staff members. When this does not happen, it is a certainty that morale will go down. If there are employees that complain, listen to them. See what causes the complaint and see if there is something that can be done to improve the working environment.

    Lack Of Planning

    There is a lot of talk about vision these days. Successful business leaders always have a great vision but that is never enough for success. It is also really important that the steps necessary to actually achieve the growth in the future are taken. This automatically involves planning. The business leaders that are successful will always take their time to plan all the steps that will be taken to improve growth speed.

    Stopping The Learning Process

    You can be a really great business leader today, and tomorrow you can end up making mistakes that make staff members lose all respect for you. It is really important for any person in a leadership position to keep learning. Do not believe that you know everything. Business can change from one month to the next. You need to be sure that you are always going to look for ways to improve your personal skills. There are always things that you can work on.

    Lacking a Positive Attitude

    When you are negative, you cannot be a successful business leader. This is something that absolutely nobody should neglect. Attitude can be changed and you can learn how to be a more positive person. Business leaders that have a negative attitude will surely end up faced with employee related problems. Make sure that you always look at the bright side of things. If something bad happens, highlight that in a positive way. There is such a thing called constructive criticism. This comes out of a positive attitude.

    The post Why Do So Many Business Leaders Fail? appeared first on All Peers.

    Matthew Gertner - AllPeersWill Your Resume Properly Define You? [Technorati links]

    August 29, 2016 07:54 PM
    Will Your Resume Properly Define You?
    Photo by CC user 124247024@N07 on Flickr. Image courtesy www.flazingo.com

    When you stop for a moment to think about it, your resume is as important a document as any you will ever have.

    With a winning resume, you open yourself up to a myriad of job opportunities over the years, opportunities that can leave you with a career one day to look back on with much happiness and pride.

    On the other side of the coin, a resume that is average at best or worse can leave you with a lot of broken dreams, something that can haunt you many years from now.

    That said, how can you position your career for good things to happen now and down the road?

    While hard work and dedication of course are the biggest components of that success, having a top-notch resume to lead you to quality jobs and opportunities is imperative.

    So, will your resume properly define you?

    Go to the Pros When Necessary

    In the event your current resume is leaving you feeling like something is missing, don’t wait around to figure out how to improve it.

    When you turn to a professional resume writing service, you know that you have professionals in your corner, professionals who will see to it that your resume is given the utmost care.

    One of the first questions you are likely to have is how do you go about finding such a service in the first place?

    In today’s Internet-driven world, starting your search online is a good way to go.

    By doing a Google search of resume writing services or using information you found through family and/or friends, check out different resume writing service websites.

    You want to find those who have the experience of doing the job right the first time around, along with providing stellar customer service.

    Once you have found the service you feel is best for your job pursuits, sit down with the pros and get started.

    As any professional resume writer will tell you, your resume should bring out only the best in what you have to offer prospective employers.

    Yes, you obviously went to school, perhaps even graduated with a four-year degree or more. That said, don’t waste too much time focusing on grades and school achievements; instead, zero in on your prior job experience. In the event you are going after your first-ever full-time job, try to at least highlight relevant part-time jobs and/or internships in your resume.

    Being Active in the Digital Age

    Another important item to keep in mind is the importance of today’s digital age.

    Many business owners are yearning for applicants who get the Internet and have no problem moving around on it, especially when it comes to areas such as social media.

    If you have Internet skills, by all means make sure they are highlighted on both your resume and in your cover letter.

    Even though you may end up with a job that is not immediately tied to marketing and/or advertising/sales, you may be asked by your employer to help promote the company’s brand on social media. Being able to do so will increase your chances of getting and keeping a position with a business that gets how important social networking is in today’s world.

    Finally, although paper resumes have not quite gone the way of the dinosaurs, they are becoming less and less the norm in today’s business world.

    As a result of this, having an online resume that shines is vital to your chances of landing the job that you really want.

    One of the advantages to going with an online resume is that you can go into your computer whenever necessary and update the resume. This makes things much easier in the event you want to send off resumes to a bunch of jobsites and companies at the last minute, albeit needing a change or two on your document.

    When it comes to finding a job in 2016 (and beyond for that matter), winning resumes and cover letters still matter a great deal.

    If you are not 100 percent confident in your ability to turn out such documents, go to the pros.

    In the end, it will be one of the best moves you ever made.

    The post Will Your Resume Properly Define You? appeared first on All Peers.

    Matthew Gertner - AllPeersVacation Rentals vs Hotels, Which is the Best? [Technorati links]

    August 29, 2016 06:45 PM

    Vacation rentals have gained great popularity in the last decade, and more people are shunning hotels in favor of a privately rented property or home. Websites such as Airbnb allow peer-to-peer rentals whereby people rent out their own houses or apartments for short-term stays; this method of private rental has really struck a chord with holidaymakers. There are also people like Brian Ferdinand, Liquid Holdings’ president, who work with vacation rental properties; this sector has also seen a dramatic rise in popularity as people seek something different from a hotel. Here we look at which is the best option for you: a hotel or a vacation rental.

    1280px-Hotel_room_beds_at_GRT_Temple_Bay_Resorts,_Mahabalipuram

    Space

    The space that you pay for in a hotel is usually just a room and a bathroom. There are of course many areas for you to enjoy in a hotel, such as the pool area, restaurant, public spaces and offices, but these are shared spaces.

    In a vacation rental, all of the space is your own from the living area to the kitchen, you can feel comfortable and relaxed in the knowledge that nobody can interrupt your space. Many private rentals offer their own pool, perfect for a secluded swim.

    Price

    Which is the best price option for you depends very much on how many nights you are going away for. If your plan is to go away for 2 or 3 nights, a hotel will give you far better value; the reason for this is that most vacation rentals only offer discounted prices for longer periods of time.

    If you plan to go away for more than three nights then the rental is the best choice for you and your wallet. Equally, if you plan on going away in a group or with a large family then a rental will also offer you the best value as the cost of individual hotel rooms will be far larger than the cost of a big property that accommodates the whole group.

    Food

    Most hotels have restaurant options and a poolside snack bar where you can eat, and many are also situated in busy areas surrounded by local restaurants and eateries, giving you a great deal of choice for where to eat. With this, however, come the added costs of restaurant food and eating out in general.

    At a rental property you of course still have the option of eating out in restaurants, but with the added option of cooking at home. Cooking at your property can be done easily in fully equipped kitchens and barbecue areas. Cooking a few meals when you’re away can be a cost-effective way of eating, and having this option gives you the flexibility to eat when you want and what you want.

    Amenities

    Hotels can offer swimming pools, tennis courts, bars, restaurants, work spaces and daily cleaning services that are designed to make the guest feel welcome. In a hotel setting you can also arrange tours and visits to local places of interest that you would have to arrange yourself in a vacation rental.

    Private rentals often come with a TV, free-flowing wi-fi, DVD players and even games consoles, meaning that there is plenty to keep you occupied in the evening. Cleaning will be your responsibility, and any adventures that you want to go on will have to be arranged by you. Fewer options on hand, but far more flexibility.

    The post Vacation Rentals vs Hotels, Which is the Best? appeared first on All Peers.

    KatasoftAnnouncing Stormpath’s Java SDK 1.0 Release [Technorati links]

    August 29, 2016 05:50 PM

    Big, big news, people: The Stormpath Java SDK has left release candidates behind and is now at 1.0!

    The goal for any Stormpath SDK has always been to make it super easy for developers to work with Stormpath using the latest in technologies and integrations. With the 1.0 release of our Java SDK, it’s a snap to integrate Stormpath’s Identity Management platform into your application. Little to no additional coding is required. It’s easier than ever to use popular frontend technologies (such as ReactJS and Angular) along with the Stormpath Java integrations (such as Servlet and Spring Boot).

    We’ve added a ton of new features to the Java SDK, including the servlet integration and integrations for Spring, Spring Security, and Spring Boot (see the laundry list at the end).

    One of the most important new features in this release is compliance with the Stormpath Framework Specification. Spec compliance guarantees that all our SDKs work the same way, and gives you the ability to create SPAs (Single Page Applications) using any of our client-side integrations, like Angular, with any of our Java integrations.

    With the 1.0 release, we’ve packed lots of features into the Java SDK and integrations, provided examples and tutorials right in the GitHub repo, and made it nearly code-less to integrate Stormpath with modern frameworks like Angular and Spring Security.

    But, don’t just take our word for it. Here’s an easy Angular SPA example.

    Angular + Spring Boot

    To demonstrate the new SPA capability, I copied the client folder from an Angular + Express + Stormpath example into a basic Spring Boot + Stormpath example. The result is a basic Angular + Spring Boot + Stormpath example application.

    There’s a single Spring controller that simply forwards back to the Angular app for the auth endpoints (like /login and /register). Using simple properties configuration, we delegate responsibility for the HTML views to the Angular app, and responsibility for the JSON models (GET) and form submissions (POST) to the Spring Boot app.
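    A forwarding controller of that kind can be just a few lines of standard Spring MVC. Here’s a minimal sketch of the idea (the class name and route list are illustrative; the actual example repo may differ):

```java
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class AngularForwardController {

    // For browser (HTML) requests, hand the auth routes back to the
    // Angular app's client-side router. The Stormpath integration still
    // serves the JSON models and handles form POSTs for the same paths.
    @GetMapping({"/login", "/register"})
    public String forwardToAngular() {
        return "forward:/index.html";
    }
}
```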

    Here’s the entire login.html file from the Angular app in the example:

    <div class="container">
      <div class="row">
        <div class="col-xs-12 text-center">
          <h3>Login</h3>
          <hr>
        </div>
      </div>
      <div sp-login-form></div>
    </div>

    So, how do we get from that to this?

    loginviewfb

    The sp-login-form makes use of the Stormpath Angular SDK. It retrieves the login model, which is served by the Spring Boot app.

    You can see this in action using the httpie command-line HTTP client:

    http localhost:8080/login

    produces:

    HTTP/1.1 200
    Content-Type: application/json
    Date: Thu, 18 Aug 2016 02:35:40 GMT
    Transfer-Encoding: chunked
    
    {
        "accountStores": [
            {
                "href": "https://api.stormpath.com/v1/directories/12OvcZl9yQuldBGw7X0LZs",
                "name": "Demo-Facebook",
                "provider": {
                    "clientId": "794907687304823",
                    "href": "https://api.stormpath.com/v1/directories/12OvcZl9yQuldBGw7X0LZs/provider",
                    "providerId": "facebook"
                }
            }
        ],
        "form": {
            "fields": [
                {
                    "label": "Username or Email",
                    "name": "login",
                    "placeholder": "Username or Email",
                    "required": true,
                    "type": "text"
                },
                {
                    "label": "Password",
                    "name": "password",
                    "placeholder": "Password",
                    "required": true,
                    "type": "password"
                }
            ]
        }
    }

    The Angular app uses this login model to render the login view, including the Facebook button. When you submit the login form, it makes a POST to the /login endpoint, which again is handled by the Spring Boot app. Easy peasy!

    The Full Java SDK 1.0 Feature List

    In addition to SPA support across all the integrations, the following is included in the 1.0 release:

    1. Angular Example: This new example joins the other examples we have in the SDK repo in the examples folder. It demonstrates how easy it is to create an application with an Angular front end and Spring Boot backend, all integrated with Stormpath.
    2. Content Negotiation: The rules spelled out in the framework specification determine whether to return JSON or HTML responses. This makes it very easy to configure a mixed application, such as Angular on the front end and Spring Boot on the back end. This is done in configuration with no additional coding.
    3. Social Providers: Login and registration support for Google, Facebook, LinkedIn, and GitHub. Simply map the appropriate Directory type to your application and the Login view will show the correct button for the social provider. No additional coding is needed.
    4. SAML Providers: You can easily add external SAML providers to your application. Simply map the SAML Directory to your application and the Login view will show a button with the Directory name. No additional coding is needed.
    5. OAuth2 client_credentials grant type: You can allocate and manage API keys for your users with Stormpath. Now, you can use those API keys to get an Access Token for use in hitting protected endpoints in your application with support for the client_credentials grant type.
    6. Single Sign-on: Support for Stormpath’s SSO service – ID Site – is now available in the Servlet integration. (ID Site is already supported in the other integrations.)
    7. Event Handlers: Support for Pre and Post login and register handlers makes it easy to have side effects, such as logging, when these events occur.
    8. Custom Registration Fields: Easily add additional fields to the default registration form. This is expressed in properties with no additional coding required. Non-standard fields are automatically stored as Custom Data.
    9. Profile Endpoint: Added /me endpoint to return JSON profile information for authenticated users.

    Additionally, the following dependency and code updates are included in this release:

    1. Significant Spring Security performance improvements
    2. Internationalization (i18n) support / improvements
    3. Our account cookie (the way we used to keep client-side state) has been replaced by an access_token and refresh_token cookie.
    4. All our controllers are filters now (we were previously using handlers). This allows a request to pass through to be handled by custom client code.
    5. Removed support for JDK 6
    6. Removed all code and docs for previously deprecated interfaces
    7. Upgraded all external dependencies to latest versions, including Spring Security 4.1.2 and Spring Boot 1.4.0

    Along the way, we built a Framework Test Compatibility Kit for all of Stormpath’s integration developers to use. It ensures that whether you’re using the Node.js Express integration or the PHP Laravel integration, you can expect uniform responses to your requests as defined in the Stormpath Framework Specification.

    The five primary Java integrations in the Java SDK project (Servlet, Spring WebMVC, Spring Security Spring WebMVC, Spring Boot WebMVC, and Spring Security Spring Boot WebMVC) each pass all 112 tests in the TCK.

    All the things! – Java SDK Documentation Edition

    You can get to all of the Java SDK documentation here. Want to get started with Java and Stormpath in 10 minutes or less? Check out our Quickstart.

    If you want to take a deep dive into the Core Java SDK, jump into the Product Guide. We’ll take you from a basic Spring Boot application to a Stormpath integrated Spring Security Spring Boot WebMVC application, complete with fine-grained access controls in our Spring Boot Tutorial.

    The post Announcing Stormpath’s Java SDK 1.0 Release appeared first on Stormpath User Identity API.

    KatasoftWatch: Token Authentication with ASP.NET Core [Technorati links]

    August 29, 2016 11:31 AM

    Token authentication is a critical element of building scalable identity, authentication, and authorization management. The token-based approach is stateless, secure, mobile-ready, and designed to scale with the size of your user base (without additional burden on your servers).

    This Token Authentication webinar from Stormpath’s .NET Evangelist, Nate Barbettini, breaks down both token verification and token generation in the new ASP.NET Core stack. He also covers:

  • Sessions vs. tokens
  • Statelessness
  • The anatomy of a JWT
  • Signature cryptography
  • Hosted user identity

    You can view the slides that accompany this webinar on Slideshare.
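    The “anatomy of a JWT” point is easy to demonstrate yourself: a token is just two Base64url-encoded JSON segments (header and payload) plus an HMAC signature. Below is a small sketch using only the Python standard library; the claims and secret are made up for illustration:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe Base64 (RFC 7515)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    segments = [
        b64url(json.dumps(header, separators=(",", ":")).encode()),
        b64url(json.dumps(payload, separators=(",", ":")).encode()),
    ]
    signing_input = ".".join(segments).encode()
    segments.append(b64url(hmac.new(secret, signing_input, hashlib.sha256).digest()))
    return ".".join(segments)

def decode_segment(segment: str) -> dict:
    # Anyone can read the header and payload; only verifying
    # the signature requires the secret.
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = make_jwt({"sub": "jane@example.com"}, b"not-a-real-secret")
header_b64, payload_b64, signature_b64 = token.split(".")
print(decode_segment(header_b64))   # {'alg': 'HS256', 'typ': 'JWT'}
print(decode_segment(payload_b64))  # {'sub': 'jane@example.com'}
```

    The takeaway mirrors the webinar: because the payload is readable by anyone, never put secrets in a JWT, and always verify the signature before trusting the claims.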

    Excited to learn more about authentication and JSON web tokens? Check out these resources:

  • Token Authentication in ASP.NET Core
  • 10 Minutes to User Authentication in ASP.NET Core
  • OAuth with JSON Web Tokens in .NET
  • Where to Store Your JWTs — Cookies or HTML5 Web Storage
  • Token Authentication with Stormpath

    The post Watch: Token Authentication with ASP.NET Core appeared first on Stormpath User Identity API.

    Matthew Gertner - AllPeers3 Tips for Making Board Portal Software Work for You [Technorati links]

    August 29, 2016 12:15 AM

    The people who sit on the board of your organization are busy individuals. Many of them come from across the country (or farther yet) for the quarterly meeting, and distributing information in a timely manner can be quite a challenge if you still rely on printing and couriering. That’s why more and more organizations are finally making the shift to paperless meetings. It makes distributing financial reports, agendas, and binders much easier, especially for organizations who draw on directors from different parts of the country.

    Better board of governance software apps are not just designed to make it easier to distribute binders digitally. They are also made with productivity in mind, which means that they make it simpler for directors to read, annotate, and collaborate on documents. Software such as that offered by Aprio also makes it easier for the board administrator as well as the chair responsible for keeping meetings on time and moving. These board portals are designed to make governance more efficient, ultimately giving directors more time to discuss important decisions.

    How can Board Portal Software work for you?

    Below are just some of the simple, streamlined features effective board portal software should offer.

    1) Single Step PDFs

    It can be incredibly frustrating how much time gets wasted converting files originally created in Word, Excel, and PowerPoint into PDFs. A good board portal makes it simple to display files as PDFs with the mere click of a button. It’s always advisable to double check the formatting quality of rich documents before sending them. It may not seem like a significant feature, but without it, you can wind up wasting a surprising amount of time converting and formatting documents.

    2) Digitally Track and Approve Expenses

    Tracking and approving director expenses used to be a nightmare. Administrators face all kinds of spreadsheets, receipts, and disorganized claims. When they use portals such as Aprio’s, directors can enter their expenses directly into an Expense library. The portal even allows them to scan and upload all of their necessary receipts and submit them for payment by email. Administrators get to track expenses as they accumulate and, if necessary, remind directors of bylaws and practices governing expenses. Early stage companies are often advised that they should only be reimbursing directors for reasonable expenses, i.e., if the executives are traveling coach, the company should only be reimbursing coach class tickets for directors, too.

    3) Using Links in Your Agenda

    It’s always a struggle to keep meetings on time and productive, but you’re not running an effective board without the ability to stay on schedule. If you’re using portals such as the latest board management software from Aprio, it’s easy to keep meetings running productively. One way everyone can save time during a meeting is by retrieving important reference documents digitally, via hyperlinks attached to the meeting agenda. One exceptional feature you should look for is the ability to attach links that are unique to the user, a useful feature when you’re dealing with in camera meetings.

    If you’ve been wasting time in meetings distributing materials, tracking expenses, or even something as simple as formatting PDF files, it’s time to improve your efficiency with a board portal.

    The post 3 Tips for Making Board Portal Software Work for You appeared first on All Peers.

    August 28, 2016

    Matthew Gertner - AllPeersSurgical suction: Not a one-fits-all system [Technorati links]

    August 28, 2016 11:24 PM
    Good surgical suction is integral to a successful operation
    Photo by KRISTOPHER RADDER/U.S. Navy

    Whether you have been the subject of a procedure involving surgical suction, or have just seen it on television dramas, it’s fair to say that most of us think it comes in one shape and one form. Its importance means that the wrong selection can endanger the health of a patient.

    As the title may have given away, this isn’t necessarily the case. When surgeons turn to suction during a procedure, they are left with several drain choices. It means that not all surgeries will involve the same type of suction, but more on that later.

    To give more of an idea of suction, and just why and how it is used, here’s a lowdown on how surgeons approach it.

    What is suction and why is it used?

    Suction is something which will be used in the vast majority of surgeries around the world, usually with aspirator pumps. The main aim is to drain fluid from an area or to decompress it. Fluid could come in the form of blood or pus, or in some cases a surgeon might simply want to prevent air accumulating in a certain area of the body. There are even times where fluid might be removed to identify possible leakages.

    In truth, the list of surgeries involving suction could be endless, ranging from plastic surgery to breast surgery to chest drainage.

    One of the most interesting things about surgical drainage is that there isn’t really a set of “rules” for surgeons to follow. In other words, most surgeons simply use their own preferences; there is little scientific evidence indicating how suction should be performed effectively.

    What are the different types of suction available to surgeons?

    As we’ve pointed out, suction isn’t necessarily a one-fits-all solution. Surgeons face three main choices when selecting a drain.

    The first is between an open and a closed drain. The former involves draining fluid into something like a stoma bag or a gauze pad, while a closed drain runs through a tube, with the fluid draining into a bag. The latter tends to be more common because the fluid is not left in the open, so there is a smaller chance of infection.

    The second choice is between an active and a passive drain. Active drains apply constant suction at either low or high pressure. Passive drains, on the other hand, involve no suction at all; they work purely via the pressure differential between the inside and outside of the body.

    The final choice is whether the drain is made of rubber or silastic. Rubber drains tend to exert more pressure on tissue and can cause a tract to form. In some cases this is desirable, but in others a surgeon may opt for a silastic drain, which is gentler on tissue.

    The post Surgical suction: Not a one-fits-all system appeared first on All Peers.

    Matthew Gertner - AllPeersWhy Is Online Reputation Management Difficult? [Technorati links]

    August 28, 2016 01:05 AM
    Online reputation management is hard work that needs to be taken seriously. Photo by CC user 132604339@N03 on Flickr and http://joethegoatfarmer.com/

    The truth is that online reputation management is not at all difficult these days. However, most people lack the knowledge needed to take the correct steps when they are required. The main reason online reputation management seems difficult is a lack of understanding of what it actually means.

    To better illustrate the topic, let us look at some common misconceptions and the reasons people cite when describing online reputation management as difficult. That should put you on the right path.

    Low Investment Budgets

    To properly set up an online reputation management strategy, you need to be aware of what is happening in real time. This can be a little difficult, as it normally means investing money in specialized monitoring tools. While that is out of reach for many companies, especially small to medium-sized ones, alternatives are always available. For instance, why not set up simple Google News alerts for the keywords you are interested in? This is free, and it lets you learn about the mentions that matter to you.

    Not Being Able To Respond To Negative Reviews

    When a negative review appears, many businesses simply ignore it. This is not a good approach. The problem is not that the negative review appeared; the real reputation management problem is that the review was not taken into account. There are many ways you can respond to a negative review.

    The trick is to show that you took notice and that you use the feedback offered to improve the services or products you are currently selling. Even if the mention is personal and very negative, remaining positive and letting the reviewer know that you respect what was said will help improve your reputation.

    Getting Feedback From Customers

    To work on your online reputation, you have to receive feedback from your past customers. You also need to learn from potential customers: why they consider buying and why they might not actually make the purchase. A problem arises because reputation managers assume it is tough to get feedback, most commonly pointing to online surveys, which can be pretty expensive.

    What you should know is that other ways to get feedback are now available, and they are either very cheap or completely free. For instance, social media is a great free channel for gathering customer feedback. When you engage in conversations with your followers and fans, you can ask questions, receive answers, and get all the important feedback you need.

    Never assume that reputation management is impossible and that you cannot do anything about it. Think outside the box and come up with effective ways to manage your reputation. Alternatively, seriously consider hiring professionals to do the work for you.

    The post Why Is Online Reputation Management Difficult? appeared first on All Peers.