Much training is based on cases: cases from business schools, real customer cases, or fictitious cases created to practice specific discussions or skills. With the simulation framework, any case can be turned into a simulation experience, adding competitive elements and individual and team feedback.
To receive the input and to present the feedback you will need some sort of interactive setup, e.g. iPads, PCs, or smartphones connected to the Internet. (We can help you use existing equipment, or we can deliver a turnkey setup.)
Use of the simulation framework is always a customized process where we will help you translate your specific needs into the most suitable solution. Because we have an extremely flexible framework – and a lot of experience – this does not need to be very expensive.
Once the framework is set up for your needs, it can easily be reused.
Combine your own content with our simulation framework and turn any relevant case into a fun and engaging learning experience.
TYPICAL STAGES IN A CASE-BASED SIMULATION
1) DEFINE TASK AND CAPTURE OUTCOME
Introduce the case that is used for inspiration and background (this can be done as a live presentation, distributed beforehand, presented by a customer, etc.).
Based on the inspiration from the case, teams or individuals must be briefed on the specific task and the format in which they have to deliver their contribution, e.g. a live presentation, a role play, a video (e.g. a 90-second "elevator pitch"), and/or a text form (a written Value Proposition in a predefined format).
Participants then work on their task and prepare/submit their outcome within the given time frame.
A video is a great way to share the outcome of a group exercise.
The next step will typically be to share the contributions and collect peer-to-peer feedback.
For live presentations this happens in real time (e.g. with an "observer" scoring on an iPad).
For other formats (video or text input) this can occur as soon as the contributions are collected.
Depending on the case, participants can score each other's input, or customers (or other experts) can do the evaluation.
Either way, feedback can be collected in many forms. Common examples are:
Simple scores (performance on a given scale).
Observations or recommendations.
Points related to specific actions or types of behavior (positive or negative).
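As a sketch, the three feedback forms above could be captured in one record per reviewer. Note that the field names and schema below are hypothetical illustrations, not the framework's actual data model:

```python
# Hypothetical sketch of a single peer-feedback record combining a simple
# score, a free-text observation, and points tied to specific behaviors.
from dataclasses import dataclass, field

@dataclass
class FeedbackRecord:
    reviewer: str            # who gave the feedback (peer, customer, expert)
    target: str              # the individual or team being evaluated
    score: float             # simple score on a given scale, e.g. 1-6
    comment: str = ""        # observation or recommendation
    behavior_points: dict = field(default_factory=dict)  # e.g. {"interrupted": -1}

# Example: a customer scoring a team's pitch.
fb = FeedbackRecord("customer_1", "team_a", 4.5,
                    "Strong opening, weaker close",
                    {"asked_questions": +2})
```

Keeping all three forms in one record makes it straightforward to aggregate them later, whether into averages, rankings, or qualitative summaries.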
2) COLLECT FEEDBACK
A typical interface for feedback collection: the left side contains the original submission (a video and a written description); the right side is used to capture feedback (two different scores and text comments).
3) SHARE INDIVIDUAL/GROUP FEEDBACK
When feedback has been collected, the data is processed and presented back to the players (individuals or teams). Again, this can be customized to the specific case; typical examples are:
An absolute score: "3.7 out of 6" or simply "42".
The same absolute score, but including a group average.
A ranking: "#3 out of 178" or "#29 out of 30".
The qualitative input collected for the player.
A typical interface for feedback presentation: the left side contains the quantitative feedback (a couple of scores compared to a group average); the right side is used to share the qualitative feedback (text input).
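As an illustration, the absolute score, group average, and ranking described above can be derived from raw scores with a small computation. All names and values here are hypothetical, not part of the framework:

```python
# Hypothetical sketch: turning collected scores into the feedback
# shown to one player (absolute score, group average, rank).

def player_feedback(scores, player):
    """Return absolute score, group average, and rank for one player."""
    my_score = scores[player]
    group_average = sum(scores.values()) / len(scores)
    # Rank 1 = highest score; count how many players scored strictly higher.
    rank = 1 + sum(1 for s in scores.values() if s > my_score)
    return {
        "score": round(my_score, 1),
        "group_average": round(group_average, 1),
        "rank": f"#{rank} out of {len(scores)}",
    }

scores = {"team_a": 3.7, "team_b": 4.2, "team_c": 2.9}
print(player_feedback(scores, "team_a"))
# {'score': 3.7, 'group_average': 3.6, 'rank': '#2 out of 3'}
```

The same per-player structure can feed both the individual feedback screens and an overall scoreboard.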
4) SHARE OVERALL RESULTS
Overall results can be presented in a number of different ways, e.g. as a grid that shows the distribution of groups along two different dimensions.
Finally, we can analyze the performance of a group of players and give feedback based on this. Typical examples are:
SCOREBOARDS (with different views)
Input from different scoring parameters can be combined into calculated KPIs, e.g. Airtime, Relationship scores, Proactivity, etc.
Scores on different parameters can be analyzed and compared across multiple dimensions.
Text inputs can be aggregated in word-cloud format, visualizing the most frequently occurring words and illustrating trends.
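A minimal sketch of the aggregation behind such a word cloud, assuming plain-text comments; the stop-word list and function name are illustrative, not part of the framework:

```python
# Hypothetical sketch: aggregating free-text feedback into word
# frequencies, i.e. the data behind a word cloud.
from collections import Counter
import re

# A tiny illustrative stop-word list; a real deployment would use a
# fuller one (and one per language).
STOP_WORDS = {"the", "a", "an", "and", "was", "very", "to", "of"}

def word_frequencies(comments):
    """Count how often each meaningful word occurs across all comments."""
    words = []
    for comment in comments:
        words += [w for w in re.findall(r"[a-z']+", comment.lower())
                  if w not in STOP_WORDS]
    return Counter(words)

comments = [
    "Very clear value proposition and confident delivery",
    "The delivery was confident but the value message got lost",
]
print(word_frequencies(comments).most_common(3))
# the three most frequent words, each occurring twice
```

The resulting counts drive the font sizes in the word cloud, so recurring themes across many players become visible at a glance.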