ProFutures Blog

The APF ProFutures blog features posts by the Emerging Fellows and other APF futurists. We will be sharing intriguing futures ideas and information about professional futurists and the practice of strategic foresight.

You can learn more about the Emerging Fellowship program and the inaugural class on the Emerging Fellows page. Please direct your questions to Terry Collins.

Your comments are welcome, so long as they are courteous, brief, and on topic. 
  • 26 Jan 2015 1:17 AM | Anonymous member (Administrator)

    Written by: Alireza Hejazi, APF Emerging Fellow

    Attending an international exhibition on a marketing mission recently, I was asked to score the service and product providers there and nominate my preferred candidate of the expo. After reviewing many pavilions, I drew up a top-ten list and scored the companies against my checklist. I voted for a European company that met most of my desired criteria for presenting services in a client-friendly manner. On the final day, my nominee won the cup, not just because of my vote but because many other evaluators had voted for it as well. What looked good in my eyes was also fine in the eyes of others. I asked myself whether such scorings and rankings could also be made for professional futurists. That idea prompted this blog post.

    I think that ranking futurists can be a challenging task for a number of reasons. First, there is no universally agreed scoring system for futurists. Second, futurists come from different fields of expertise and cannot all be ranked on the same scale. Third, rankings would only be valid if conducted by institutions authorized to make them. I would like to share some of my assumptions and questions about the feasibility of such a scoring system in this post. I should note that the goal of ranking is not to drive low scorers away, but to treat them as candidates for higher rank through professional development.

    The first question that comes to mind is this: “What is the benefit of ranking?” or “Why should futurists be ranked?” In my view, futurists can benefit easily from their own personal branding without ranking; but if they are to enjoy the merits of professional recognition, they should be identified by the degree of excellence they provide with their services. In other words, ranking is a means of qualification in terms of the knowledge, skill and quality of service that professional futurists provide for their clients. In my view, professional recognition and its related merits logically belong to those who deliver high-quality foresight outputs. Fortunately, the APF’s Most Significant Futures Works program has been serving this idea since 2013.

    Another question that arises concerning a ranking system is this: “Can futurists be ranked according to their academic degrees, the number of their published or cited works, the number of their students, the efficiency of the methods and techniques they have developed, or the number of their daily Tweets?” or “Should they be judged according to the value they bring to their own nations and to humanity as a whole?” Conventional ranking methods may sound useful for scoring futurists who live in societies where thinking and acting on the future is respected, but what about futurists who live in regions where futures work is dismissed as nonsense by local decision makers whose positions rest on aristocracy, not meritocracy?

    Any conceivable scoring system for futurists should recognize that futurists vary in their talents and capabilities. While many are competent in qualitative research methods, some are brilliant in quantitative methods of inquiry. Many futurists are good communicators, and some are skillful at communicating what lies ahead in innovative ways. Most are open-minded lifelong learners, but what makes them valuable to themselves and the societies they serve? What are the social impacts of futurists, and how can a ranking system measure them on national and international scales?

    The first step to be taken in this direction is to provide a clear and detailed description of the knowledge, skills and attributes expected of a competent futurist or foresight practitioner. A competency framework like the one developed by the International Manipulative Physical Therapy Federation (Rushton, 2013) could also be created for professional futurists, based on these components:

    (1) Dimensions: the major functions of foresight performance at postgraduate level. The performance of strategic foresight and futures studies graduates should be evaluated in practice after graduation.

    (2) Competencies: the components of each dimension, stated as performance outcomes. The competencies linked to a dimension indicate the standardized requirements that enable a professional futurist to demonstrate each major function of performance at postgraduate level.

    Competencies can be divided into competencies related to knowledge, skills and attributes.

    (a) Knowledge: Encompasses the theoretical and practical understanding, use of evidence, principles, and procedures.

    (b) Skills: Encompasses the cognitive, psychomotor and social skills needed to carry out pre-determined actions.

    (c) Attributes: Encompasses the personal qualities, characteristics and behavior in relation to the environment.  
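
    To make these components concrete, here is a minimal sketch of how such a framework might be modeled in code; every dimension, competency and example entry is a hypothetical placeholder, not taken from Rushton (2013).

```python
# A minimal sketch of the framework components above. All names below are
# hypothetical placeholders, not entries from Rushton's (2013) framework.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Competency:
    """A component of a dimension, stated as a performance outcome."""
    name: str
    knowledge: List[str] = field(default_factory=list)   # theory, evidence, procedures
    skills: List[str] = field(default_factory=list)      # cognitive, psychomotor, social
    attributes: List[str] = field(default_factory=list)  # personal qualities, behavior


@dataclass
class Dimension:
    """A major function of foresight performance at postgraduate level."""
    name: str
    competencies: List[Competency] = field(default_factory=list)


framework = [
    Dimension(
        name="Environmental scanning",  # hypothetical dimension
        competencies=[
            Competency(
                name="Identify weak signals of change",
                knowledge=["scanning frameworks such as STEEP"],
                skills=["source evaluation", "pattern recognition"],
                attributes=["open-mindedness", "curiosity"],
            )
        ],
    )
]
```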

    There are other concerns in the workplace that should be addressed. Research shows that ranking systems are often viewed negatively by people. However, many major corporations such as General Electric (GE), Intel, and Yahoo! use relative rankings and believe in their advantages. For example, Jack Welch, the former CEO of General Electric, instituted a forced ranking system at GE in which 20% of employees would be in the top category, 70% would be in the middle, and 10% would be at the bottom rank. Employees who were repeatedly ranked at the lowest rank would be terminated (Ryan, 2007). Corporate futurists or foresight practitioners might be ranked internally within the corporations they work for, but how should they be ranked externally, on a larger scale, within the global community of futurists?
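
    As a rough illustration of how such a forced-ranking scheme works mechanically, here is a minimal sketch that buckets scores into the 20/70/10 categories described above; the names and scores are invented.

```python
# A minimal sketch of a 20/70/10 forced-ranking scheme. Names and scores
# are invented for illustration.
def forced_rank(scores: dict) -> dict:
    """Assign each person to the top 20%, middle 70%, or bottom 10% by score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    n = len(ordered)
    top_cut = max(1, round(0.20 * n))
    bottom_cut = max(1, round(0.10 * n))
    ranks = {}
    for i, person in enumerate(ordered):
        if i < top_cut:
            ranks[person] = "top 20%"
        elif i >= n - bottom_cut:
            ranks[person] = "bottom 10%"
        else:
            ranks[person] = "middle 70%"
    return ranks


print(forced_rank({"A": 92, "B": 81, "C": 77, "D": 64, "E": 55,
                   "F": 50, "G": 48, "H": 45, "I": 40, "J": 31}))
# A and B land in the top 20%, J in the bottom 10%, the rest in the middle.
```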

    Relative rankings may create a culture of performance at the corporate level by making it clear that low performance is not tolerated, but what about rankings made by scoring futurists at the professional level? Should a low scorer be expelled from international futurist communities? Or should he or she be prohibited from practicing the foresight profession without the required certifications? More importantly, what are the potential downsides of such rankings? Would a ranking encourage futurists to upgrade their academic education in foresight and develop their professional skills, or conversely discourage them and deprive them of professional recognition?

    There are many more questions and assumptions like those mentioned above, and together they make a long list. They highlight the special attention that should be paid to every detail of any effort toward ranking futurists. Until a standardized ranking system is complete, conducting self-other rating agreement surveys may be the easiest way to gain a better understanding of futurists’ standing in the companies and organizations they serve.

    References

    Rushton, A. (2013). Educational Standards in Orthopaedic Manipulative Therapy, Part A: Educational Standards. International Manipulative Physical Therapy Federation.

    Ryan, L. (2007, January 17). Coping with performance-review anxiety. Business Week Online, 6.

    About the author

    Alireza Hejazi is a PhD candidate in Organizational Leadership at Regent University and a member of APF Emerging Fellows. His works are available at: http://regent.academia.edu/AlirezaHejazi

  • 19 Jan 2015 12:17 PM | Daniel Bonin (Administrator)

    Businesses want to increase repurchase rates, achieve positive word-of-mouth recommendations and promote cross-buying behavior. But how can futurists get there? There are various hurdles to overcome if one assumes that futurists are part of the service industry.


    Client satisfaction is based on the comparison between expected and actual (perceived) performance. However, the client might not have an idea of what to expect and/or might hold an unfavorable mental model of foresight. The service provider has the opportunity to leave good impressions and build trust at points of contact with clients (“moments of truth”). During these moments, emotional intelligence is sometimes even more important than purely factual knowledge. Gaps in communication are the source of many unnecessary misunderstandings. But the distinct characteristics of services in particular can be challenging for futurists:

    Five characteristics of services

    1. Production and consumption might take place at the same time (e.g. workshops).

    2. Once the service is provided, the client cannot exchange it for another product (as you can with physical products), but may ask you to revise your work. This characteristic is a source of conflict, as futurists often challenge the client’s opinions and views. Building trust is therefore essential.

    3. Foresight services cannot be stored as they are normally individualized to fit the client’s needs and objectives.

    Two characteristics are particularly important:

    4. It is hard to measure or assess the quality of foresight services. Intangibility is a source of misunderstandings and also makes comparisons between futurists very hard.

    5. The client plays an integral part in the production process of foresight-related services. The outcome depends on both parties, but the client might not realize this. Here, again, it is essential to establish mutual trust.

    So one can say that the client, or anyone thinking about hiring a futurist, faces a high degree of uncertainty caused by these characteristics of services. Clearly, futurists need to build trust and raise service quality in order to increase customer satisfaction. But to raise service quality, one first needs to identify areas where improvement is necessary. In marketing, some of the following options are used to build trust and assess service quality.

    Build Trust

    • Make sure that you respond to clients’ needs in a flexible and fast way, but be honest and clear about what is attainable and what is not.
    • Create reference points: clarify what is expected and what can be expected (e.g. provide a sample of your work).
    • Hand out physical objects as a gift (e.g. an artifact from the future), as some service providers do (e.g. t-shirts from restaurants), to increase psychological proximity.
    • Create trust by social proof: name clients, provide testimonials and use smart wording (e.g. “Over X business executives have already joined our newsletter”).
    • Show expertise: provide detailed descriptions of the knowledge and abilities of you and your team.

    Assess Service Quality

    While simple questionnaires could be used to assess service quality, more sophisticated tools may provide additional insights. The following tools and methods might be used to assess service quality and to gain a better understanding of service flows. By doing so, one can not only improve service quality and reduce misunderstandings but also standardize client communication and processes.

    Communication gaps: Researchers have identified communication gaps that often occur and decrease service quality. Figure 2 shows where special attention needs to be paid (Parasuraman, Zeithaml and Berry, 1985).

    Blueprinting: Blueprinting is used to structure and sketch service flows. A blueprint consists of different types of “lines” and types of “activities” as shown in figure 3. This technique can be used to identify “moments of truth” and to standardize service processes.

    SERVQUAL: Using a Likert scale, clients’ expectations and perceptions are measured and compared along five dimensions: (1) Tangibles (physical facilities, equipment and employees), (2) Reliability, (3) Responsiveness, (4) Assurance (e.g. credibility and competence), (5) Empathy/Customer Understanding. I tried to create a questionnaire based on the book by Zeithaml, Parasuraman and Berry (1990), which can be found here.
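
    As a rough sketch of the gap logic behind SERVQUAL (perception rating minus expectation rating per dimension), assuming a 1-7 Likert scale and invented ratings for a single client:

```python
# A minimal sketch of a SERVQUAL-style gap score: perception minus
# expectation per dimension on a 1-7 Likert scale. Ratings are invented.
ratings = {  # dimension: (expectation, perception)
    "tangibles":      (5, 5),
    "reliability":    (7, 5),
    "responsiveness": (6, 6),
    "assurance":      (6, 4),
    "empathy":        (5, 6),
}

for dimension, (expectation, perception) in ratings.items():
    gap = perception - expectation  # negative = service falls short of expectations
    print(f"{dimension:15s} gap = {gap:+d}")
```

    A strongly negative gap (assurance in this invented example) flags where the service falls short of what the client expected.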

    Critical Incident Method: Clients are asked to recall and describe “critical moments” (either positive or negative) in order to gain insights into the causes, outcomes, feelings, actions involved and resulting changes in behavior. Afterwards, all reported problems are clustered, the frequency of each problem is assessed, and its relevance (degree of annoyance) is analyzed (“Frequenz-Relevanz-Analyse für Probleme”, i.e. frequency-relevance analysis of problems).
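
    A minimal sketch of that frequency-relevance step, assuming invented incident data: cluster the reported problems, count their frequency, and weight each by its average degree of annoyance.

```python
# A minimal sketch of a frequency-relevance analysis. Incident data
# (problem category, annoyance on a 1-10 scale) are invented.
from collections import defaultdict

incidents = [
    ("unclear deliverables", 8), ("unclear deliverables", 9),
    ("jargon-heavy workshop", 5), ("late report", 7),
    ("unclear deliverables", 7), ("late report", 6),
]

# Cluster incidents by problem category
by_problem = defaultdict(list)
for problem, annoyance in incidents:
    by_problem[problem].append(annoyance)

# Priority = frequency x mean annoyance (relevance); highest first
rows = []
for problem, scores in by_problem.items():
    freq, relevance = len(scores), sum(scores) / len(scores)
    rows.append((freq * relevance, problem, freq, relevance))

for priority, problem, freq, relevance in sorted(rows, reverse=True):
    print(f"{problem:22s} freq={freq} relevance={relevance:.1f} priority={priority:.1f}")
```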



    References

    Bitner, M. J., Ostrom, A. L., & Morgan, F. N. (2008). Service blueprinting: A practical technique for service innovation. California Management Review, 50(3), 66.

    Borth, B. O. (2004). Beschwerdezufriedenheit und Kundenloyalität im Dienstleistungsbereich: Kausalanalysen unter Berücksichtigung moderierender Effekte. Springer.

    Edvardsson, B., & Roos, I. (2001). Critical incident techniques: Towards a framework for analysing the criticality of critical incidents. International Journal of Service Industry Management, 12(3), 251-268.

    Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49(4), 41-50.

    van Doorn, J. (2004). Zufriedenheitsdynamik: Eine Panelanalyse bei industriellen Dienstleistungen. DUV.

    Zeithaml, V. A., Parasuraman, A., & Berry, L. L. (1990). Delivering quality service: Balancing customer perceptions and expectations. Simon and Schuster.

  • 12 Jan 2015 9:02 PM | Anonymous member (Administrator)

    Towards Disintermediation.

    Jason Swanson, APF Emerging Fellow.

    In my post last month I explored a few ideas about how big data might affect the futures field in terms of both practice and business. This post will continue that exploration, this time focusing more on the potential implications for the business side of things as big data tools such as R and Python come to the fore, and as academic programs such as Udacity’s nanodegree in Data Analysis come to market. For an excellent primer on these tools, please take a moment to read my colleague Julian Valkieser’s posts found here and here.

    I would like to explore the implications of big data for the futures field using a lens of scarcity, abundance, and disintermediation. There are quite a few examples of industries that have followed this line of development. The retail industry comes to mind: consumers were once relegated to seemingly few retail outlets, then given an abundance of options, and now retail is becoming increasingly disintermediated as the internet has opened up opportunities for peer-to-peer transactions. The music industry has followed this path, and even public education here in the United States is dealing with the changes that a system and its stakeholders must contend with as it moves from the abundance period, in terms of information access, into disintermediation.

    Applying this lens to the futures field, one might argue that thinking about the future is already disintermediated. Every person alive thinks about the future in some capacity; it is part and parcel of living. But if we define the futures field as the professional practice of studying the future using a defined methodology, an argument can be made that the field is still at the scarcity stage. The number of professional futurists is tiny in comparison to other professions. I can’t help but reflect on how many times I have given my futurist elevator speech during my brief time in this profession whenever I am asked what it is I do.

    What might it take to move the futures field from scarcity to abundance, or even all the way to disintermediation? If we consider the examples of retail, music, and education, the key additive was technology, particularly in the move from the abundance stage to the disintermediation stage. In the futures field, there are signs that a move from scarcity to abundance may be beginning. Among those signals are the small but growing number of academic programs offering courses and degrees in foresight, growing interest in existing foresight courses and programs, and even growing interest in the application of foresight methodology in disciplines such as design.

    If these signals signify a slow move to the abundance stage for the foresight field, what might that look like? Let’s use the retail industry at the abundance stage as an example once more. At that stage, the consumer had a high degree of choice in retail outlets, a high degree of choice in items, and many price points for those items. The foresight field at the abundance stage may look similar: a high degree of choice in foresight services, many practitioners, an increase in organizational or internal futurists, more choices in training programs; in short, more. Of course, a critical uncertainty here is whether there will be demand to account for all this “more” beyond a simple interest in methods.

    As I mentioned in my last post, big data has the potential to change our practice and our industry. As big data tools continue to simplify and improve, there will be an effect on the futures field. One of those effects might be speeding futures through the abundance stage into disintermediation. As these tools develop and become easier to use and more accurate, the by-product may be a growing interest in what’s next. Using predictive analytics and modeling to give a client, company, or organization accurate peeks into the future could push the field into adopting these tools (again, please check out Julian’s blog post for an excellent use case). The move towards incorporating this type of data into our work may have the effect of “legitimizing” the work in the eyes of clients who in the past may have been standoffish, or who shy away from more qualitative pursuits.

    As big data tools continue to develop over time, they have the potential to be a factor in moving the futures field into the disintermediation phase. We can expect that big data tools, like nearly all forms of technology, will become cheaper and easier to use. As ease of use increases and price points fall, there is the potential for new users. Given enough time, a user might be able to work with these tools through apps on a phone or other personal device. A person might one day run predictive models with the same ease as sending a text message. This sort of breakthrough could be compared to having a futurist in your pocket, crunching massive data sets and giving highly plausible scenarios back to the end user. At that point the futures field might be considered disintermediated, with users able to apply methods and tools directly in developing images of what the future may be like.

    Could big data and big data tools be a catalyst to push the futures field towards disintermediation?

  • 05 Jan 2015 7:33 AM | Julian Valkieser (Administrator)

    In my last article, I referred to the importance of Big Data, which has become more and more important for decisions over medium-term horizons. Big Data is an often-used buzzword, especially among large corporations and middle management.

    I have mentioned R programming, claiming that everyone in the area of foresight should learn it in the near future. Now we have to add the programming language Python. For people with a lot of self-discipline, I recommend a Google search and a good book. For myself, I have gone the way of Coursera, a Massive Open Online Course (MOOC) platform, which I can highly recommend.

    It is not so much about becoming a programmer; after all, that is not our field of interest. Rather, it’s about using these programming languages to play with large amounts of data so that you can develop an understanding of the benefits. Of course, there are also tools that require no programming skills. Maybe you have heard of NeuroBayes or RapidMiner? But someone who wants to sell a car should also know how a car works.

    The tool RapidMiner in particular shows very clearly what this kind of tool does and what Big Data is all about: the visual presentation or summary of large amounts of data. Only a good representation and summary can turn Big Data into a benefit.

    Beautiful examples of where data analysis is used for short-term forecasts are as follows:

    http://edition.cnn.com/2012/07/09/tech/innovation/police-tech/

    http://www.popsci.com/science/article/2011-10/santa-cruz-experiment

    http://www.skyhookwireless.com

    http://firstmonday.org/article/view/3663/3040

    http://www.slate.com/...big_data_...guessing_that_you_re_pregnant.html

    Of course, these examples are not all transferable or reality-based. But, to get back to the metaphor of the car, in terms of data analysis we are still in the early days of the Ford Model T.

    There are certainly countless more such examples, all more or less well understood and scientifically sound. Another example: Nate Silver predicting an election.

    One thing can be said now: forecasts based on the past are less reliable, or partially obsolete, unless one starts from seasonally recurring events such as the flu or the purchase of heaters in winter. If you can analyze data in terms of motives and interests (see also “Computing and Intuiting Futures” by Sandra Geitz), a different picture emerges. Motives and interests provide information of the form “we are going to…”: situations such as “I’ll buy a car if I get a raise.”

    This could be transferred to the macro level: if the Democrats are elected in 2020, they will finally push through a specific law, because we all know they are still working on it. It is very likely that they will do so if external circumstances allow it. This is where Big Data comes into play. The Democrats’ re-election depends in turn on people’s interests, which can be reflected, for example, in Google queries.

    All of this relates only to medium-term time horizons, and foresight is less about making a prediction and more like depicting a scenario. However, a scenario could be represented more closely or exactly, as already hinted at by Jason in his “A Shrinking Cone of Plausibility” blog post. Big Data could serve to identify the so-called trigger events and create scenarios based on them. For example, for the next US presidential election, Jason used a cone of plausibility in a familiar way. I like this approach. But for me, Big Data is for representing starting points or trigger events from which you can create scenarios in the distant future.

    Existing scenarios are mostly based on the current day, the status quo. At this point, let’s go back to the Big Data analysis in which the Democrats will be re-elected. Based on this forecast, with a certain probability, we can build a scenario that is mirrored not from today’s point of view but from the status of the so-called trigger event: that a particular party is elected. Of course, this should not be the only factor in our scenario. Other trigger events could be used, drawn from other interests and motives. What are the media interested in? Where have the most protests been expressed? Which governments were overthrown, and which companies enjoy continuously high investment in the market? How have prices developed for this and that? This information can be reflected more precisely in the near future with Big Data analytics. Of course, not 100% accurately, but more accurately than if it were not used, or only subjectively evaluated.

    The recommendation


    Try to engage with R and Python. Look at the tools above, with which you can analyze data and represent it visually even without programming skills. The former and the latter tend to amount to the same thing.

    A pretty manageable article on R and Python in terms of big data comes from the DC data community.

    But finally, why R and Python? R is primarily used for the visual analysis of structured data sets, like those you already know from an Excel spreadsheet. Corresponding programming packages can extend R. Python is a little more powerful, albeit with the appropriate packages the functionality of the two languages overlaps. The scene will keep arguing over which tool is more appropriate. Using Python for the analysis of texts is where things get really exciting. Essentially, it is mostly a matter of counting words. How often is a given keyword mentioned in a particular text, or, even more interesting, how often is it mentioned over a given time frame across the whole web? Since most texts can be classified by author, date and so on, it is exciting to see who mentioned what, when, where and why. And that’s what makes data analysis so exciting: text analysis. As mentioned above, interests and motives are the valuable insights, as they represent the goals of individuals and groups. Might I tend to buy more organic products in the future, or try to travel without a car? Of course, most of us won’t write that down digitally. But who isn’t active in clubs, googling, mailing and shopping online? It’s all about your interests!
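
    As a minimal sketch of the keyword counting described above, assuming a tiny invented corpus of dated texts (a real analysis would crawl the web or an archive):

```python
# A minimal sketch of keyword counting over time. The corpus is invented;
# a real analysis would pull dated texts from the web or an archive.
import re
from collections import Counter

corpus = [
    ("2013-05-02", "Thinking about organic food and travel without a car."),
    ("2014-07-19", "Organic products are on my shopping list again."),
    ("2014-11-03", "Car sharing instead of owning a car."),
]

def keyword_by_year(docs, keyword):
    """Count case-insensitive whole-word matches, aggregated by year."""
    counts = Counter()
    pattern = re.compile(rf"\b{re.escape(keyword)}\b", re.IGNORECASE)
    for date, text in docs:
        counts[date[:4]] += len(pattern.findall(text))
    return dict(counts)

print(keyword_by_year(corpus, "organic"))  # {'2013': 1, '2014': 1}
print(keyword_by_year(corpus, "car"))      # {'2013': 1, '2014': 2}
```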

    A nice, easy entry case for R and Python is offered by the Beautiful Data blog.


  • 29 Dec 2014 4:39 PM | Bridgette Engeler Newbury (Administrator)

    It’s that time of year. Celebrations and traditions. Endings and beginnings.  Promises and provocations. Reflections and resolutions.  And now that the tinsel, incandescent holly and Santa-shaped shortbread are on sale, the flurry of ‘top ten’ lists will appear as quickly as the hot cross buns do (across supermarkets in the UK and Australia at least).  

    As Jim Carroll says here, it’s relatively easy to extrapolate current trends into a ‘Top Ten for 2015’; it’s quite a different matter to look further ahead, as he does, to 2025.

    Some of those lists will posit that we’re in an era of innovation, entrepreneurship and technology that will transform cities, economies and lives. Spurred on by wearables, rapid urbanisation, smart cities and rising popular demand for access to high-quality (and sometimes sustainable) infrastructure, it all points to seemingly ‘good’ growth that is assumed to follow globally.

    So I want to highlight Mashable's list of notable innovations in 2014.

    Few of the innovations that improved the world in 2014 will make it onto the top-ten lists for greatness in 2015 or beyond, and only a couple might be considered trend-setters. Why, I wonder? Compare it to a list of tech predictions like this one: just who are the incredible innovations on that list intended for? What worldview or model of subjectivity is inscribed in the scenarios and technologies offered by the developers of such marvellous wearables and other remarkable tech wizardry? And who stands to benefit? When you compare this with the Mashable list, it’s pretty obvious that most espouse a particular way of thinking about the world and civil society, with rather limited implications for people, planet and participation.

    It is one thing to reinforce the beliefs, value systems and infrastructures that underpin particular ways of life; quite another to expound the importance of technologies that privilege a few when reliable access to electricity, clean drinking water, somewhere safe to sleep or sanitary facilities is not part of everyday life for too many. I’m not denying the need for or value of innovation, invention or experimentation (the Mashable list embraces all of those), but I am questioning the way value and need are prioritised, and by whom, based on what, and the kinds of futures that are being shaped by the infrastructure, innovation and technology these choices deliver.

    As Andy Hines notes in his latest blog, maybe we could take some time to explore the ‘why’ of values, not just the ‘what’. Because there’s more to life in 2015 than networked information technology. Lasting change has to come from within, whether it’s individual, community or organisation. It won’t come from an app alone or something we plug in.

  • 22 Dec 2014 10:16 AM | Sandra Geitz (Administrator)

    Do you synthesise opinions and judgements to develop potential futures?

    Alternatively, do you conduct wide-ranging data analysis for potential futures?

    Recently I’ve been reflecting on the various ways it is possible to source potential views about our futures; how multitudes of opinions and judgements contest what counts as a valid and plausible future; how various sets of data are either universally relevant, hotly debated or ignored, depending on one’s interest in the specific issue studied. Is it ever possible to completely separate facts and opinion from one another?


    This led to the diagram below, which is a synthesis of Sohail Inayatullah’s Causal Layered Analysis (litany, facts, values and myth, discussed in an earlier post) and Otto Scharmer’s Theory U process: downloading (judgement), open mind (analysis), open heart (connection), open will (insight).

    Judging issues increasingly involves contested opinions, ranging from expert judgements to social media flaming. Analysis may include or exclude publicly and privately available data, especially as huge volumes of big data are generated. How we view the world, our values and deep stories, shapes which data we see as valid and relevant to an issue. Similarly, others with different perspectives will connect with alternative data and opinions on the same issue. Hence the preference for a depth method like Causal Layered Analysis (CLA) where views of our futures are contested. And what issues are not contested nowadays…

    Rarely are judgement or analysis sufficient alone. The underlying assumptions, biases or beliefs that can influence or determine either of these inputs remain hidden and unknown. Even combining judgement and analysis gives a similarly shallow and limited view of the future.

    Connecting with people and understanding their outlook and values generates a critical view of the input data and opinions. This illuminates which parts may have been included in or excluded from the final result. In this way, greater depth and breadth of potential future options may be perceived, enabling one to imagine interactions and potential responses by appreciating the values of each participant.

    Developing an insight into the deep stories or myths of each participant can provide the richest potential futures options. The effort to distil and synthesise participants’ values into succinct story headlines appears to make them memorable. And then, quite often, after some time germinating and ruminating… combinations of these insights and interactions form new stories, resolutions and potential futures… In this way, Causal Layered Analysis can be used as a prospective method, beyond analysis.

    What are your experiences using judgement, data, values and stories for futures?

    Does this compute or intuit with your experience?

  • 15 Dec 2014 2:06 AM | Anonymous member (Administrator)

    Written by: Alireza Hejazi, APF Emerging Fellow

    Talking to the CEO of an architecture company recently, I was confronted with this question: “How can corporate foresight create value in my company?” I was about to offer a “business-as-usual” response, but changed my mind on remembering Rohrbeck and Schwarz’s (2013) clear-cut answer identifying four faces of value creation through corporate foresight. Basing my response on their view, I told my CEO friend that corporate foresight can create an enhanced capacity to perceive change, an enhanced capacity to interpret and respond to change, an enhanced capacity for organizational learning, and a greater impact on other actors.

    In fact, the philosophy of applying corporate foresight is to reduce uncertainty by scanning the unknown in the environment. If this is the least, and perhaps the most, value it can create, then corporate foresight is worthy of consideration by managers and leaders. I also suggested that my CEO friend form a multi-disciplinary team, which might lower the risk of disregarding or misunderstanding the factors of change. In this way, his company would not fall into the traps set by personally biased assumptions about the future.

    My suggestion to form a multi-disciplinary team originated from von der Gracht and Stillings’ (2013) observation that interdisciplinary cooperation not only can solve the problem of biases but also satisfies the future needs of the target customer. In this sense, techniques like scenario planning can be useful insofar as they depict a picture of the future market and introduce new product concepts that might open new opportunities and development routes for the market and the technology. Corporate decision makers can significantly enrich their short-, medium- and long-term decisions through alternative scenarios or technology road-mapping.

    However, as Rohrbeck and Schwarz admit, the implementation of corporate foresight activities is still limited, owing to uncertainty about desirable outcomes, return on investment, and the degree of value they create for strategic planning. On the other hand, too much focus on current conditions and activities makes organizations inattentive to small changes that are taking place in the wider environment but will be impactful in the future.

    Rohrbeck and Schwarz’s review of foresight research in the European context reveals that foresight can create value for innovation and strategic management by utilizing appropriate methods in decision-making and strategic planning. Companies that practice foresight in different sectors gradually find that foresight is a tool of value creation. It contributes to their survival in a competitive business environment, especially in times of discontinuous change. More importantly, the application of corporate foresight methods can improve organizational responses and thereby improve value in innovation management. This shapes Rohrbeck and Schwarz’s (2013) paradigm linking knowledge creation to value generation.

    In my view, if the value of foresight lies in influencing decisions, then foresight practitioners should extend their efforts beyond conventional business decision making to discover alternative methods and analyses that might enrich businesses, organizations and policy makers with new solutions. The simple world of the Shell Company and its well-known six scenarios in the oil crisis has evolved into a complex world of STEEPV interactions and interpersonal relations where the survival of values is tested every day. Today, value networks are drenched in the intangible value exchanges that create their strategic advantage in the market.

    Corporate foresight can aid the companies that create value by connecting clients and customers who prefer to depend on each other. These companies create and distribute tangible and intangible value through networks: webs of dynamic relationships and exchanges between two or more individuals, groups or organizations. In my view, the success of corporate foresight in the future depends on the contribution it makes to the development and management of these networks. For such success to happen, effective interpersonal networks must be built on a foundation of expertise, trust and shared understanding. I think the APF was established precisely to build that foundation, now and for the future.

    References

    Rohrbeck, R. & J. O. Schwarz. (2013). The value contribution of strategic foresight: Insights from an empirical study on large European companies. Technological Forecasting and Social Change, 80(8), 1593-1606.

    Von der Gracht, H. A., & Stillings, C. (2013). An innovation-focused scenario process: A case from the materials producing industry. Technological Forecasting & Social Change, 80, 599-610.

    About the author

    Alireza Hejazi is a PhD candidate in Organizational Leadership at Regent University and a member of APF Emerging Fellows. His works are available at: http://regent.academia.edu/AlirezaHejazi

  • 08 Dec 2014 5:22 PM | Daniel Bonin (Administrator)

    The Theory of Inventive Problem Solving (TRIZ)

    Some weeks ago I learned the basics of TRIZ (the Theory of Inventive Problem Solving). I find the method itself, and also the history of its development, fascinating. The development of TRIZ started in the mid 1940s in Russia. Around 40,000 patents were analyzed to reveal patterns, similarities, differences and laws, in order to formulate methods that help standardize problem-solving processes*. One of the inventors of TRIZ, Genrich Altshuller, had to endure years in the gulag after he criticized the leadership’s ignorance of innovation and invention (Mishra 2006). During this time he continued to develop TRIZ and made friends with other prisoners by telling them science fiction stories, which he analyzed as well. The TRIZ toolkit finally made its way to Europe and the U.S. after the end of the Cold War.

    TRIZ assumes that typical solutions can be found for recurring problems, and that psychological barriers like inertia hinder problem solving. Algorithmic problem-solving methods and creativity techniques were therefore developed to overcome such problems. One can say that, in contrast to brainstorming or trial and error, TRIZ relies on solutions that have proven useful in the past. Famous methods of the TRIZ toolkit include the 40 TRIZ Principles (described later on) and the Algorithm of Inventive Problem Solving (ARIZ).

    Clearly, TRIZ aims to find solutions to technical problems and does not intend to describe possible futures. But the inventors of TRIZ believed that creativity techniques help overcome psychological inertia and can increase the inventiveness of ideas. For instance, the Size-Time-Cost Operator method assumes that material, space, time and money/costs are (a) unlimited or (b) limited or nonexistent, in order to find new solutions to problems (Hentschel et al. 2010; Savransky 2002). I believe that approaches like the Size-Time-Cost Operator could be used to imagine or invent unusual and extreme futures. And what I find particularly interesting is the idea of using some of the TRIZ creativity techniques to create a “warming up and stretching program” for workshops, in order to familiarize participants with out-of-the-box thinking.


    Using TRIZ to facilitate creativity and encourage out of the box thinking in workshops

    Imagine you have to run a workshop with participants who have never thought about the future. To make the topic easily understandable, a simplified perspective might be presented. Reading Savransky’s (2002) book on TRIZ, I came across some methods and games that might be used to create such a “warm up and stretching program”.


    The Value Changing Method confronts participants with the question of what happens if an object (e.g. a technology, or societal values and norms) with an extraordinary value is rendered useless. One could then use the Good Bad Game, which asks players to find something good in a bad situation (or the other way around), to direct the focus toward positive implications and thus further facilitate creativity. The Snow Ball Method could finally be used as a warming-up activity to introduce the basics of system dynamics. Here you think about interrelationships and ask questions like: what happens to X if Y is changed, and how does this affect Z?


    Other application fields of TRIZ

    Furthermore, the more technical parts, like the 40 TRIZ Principles, might be used to simplify foresight methods. The 40 TRIZ Principles are usually applied to reduce the complexity and increase the effectiveness of systems, and foresight methods can undoubtedly be considered complex. The 40 TRIZ Principles (e.g. “Taking out”, “Merging of Objects”, “Periodic Action” (replace a continuous action with a periodic one), “Skipping”, “Cheap Short-Lived Objects”) consist of recurring solutions that were used in the analyzed patents to solve problems and cut through complexity**. As foresight processes are labor- and time-intensive, small and medium-sized companies might struggle to deploy the necessary resources. A simplification of foresight methods might be desirable when educating such clients or establishing foresight processes for them. Bannert and Warschat (2007) used the principles to modify management methods like scenario analysis (click here for an illustration of their simplified method and a brief overview of some TRIZ principles).


    The methods described in this blog post aim to create novel ideas by changing an existing object or its function. I wonder whether the TRIZ toolkit could be used to invent Wild Cards based on the present, using tools such as the 40 TRIZ Principles or the so-called Fantogram. The Fantogram crosses two dimensions: (a) the way an object is changed and (b) the methods used to change it (see figure below). The advantage of this method is that you create more creative ideas; normally you would tend to come up with a new idea based on only one dimension (Zhuravleva 2005). The invention of Wild Cards will be covered in another blog post.


    Fantogram: Savransky (2002) and Frenklach (1998)
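
    As a hypothetical sketch of the Fantogram idea, one could cross the two dimensions programmatically to generate wild card prompts; the axis entries below are invented examples, not Frenklach’s original categories.

```python
# A hypothetical sketch of using the Fantogram's two dimensions to generate
# wild card prompts. The axis entries are invented examples, not
# Frenklach's original categories.
from itertools import product

object_aspects = ["function", "size", "environment", "lifetime"]   # (a) what is changed
change_methods = ["reverse it", "remove it entirely",              # (b) how it is changed
                  "make it universal", "accelerate it dramatically"]

def wild_card_seeds(subject):
    """Yield one prompt per (aspect, method) combination."""
    for aspect, method in product(object_aspects, change_methods):
        yield f"Wild card seed: {method} -- the {aspect} of {subject}"

for seed in list(wild_card_seeds("the power grid"))[:3]:
    print(seed)
# Wild card seed: reverse it -- the function of the power grid ...
```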



    *Please see Souchkov (2005) for more information on the history and development of TRIZ.

    **A complete list and precise description of all 40 TRIZ principles can be found here.


    References

    Altshuller, G. (1996). And suddenly the inventor appeared: TRIZ, the theory of inventive problem solving. Technical Innovation Center, Inc. (translated and edited by Lev Shulyak and Steven Rodman)

    Bannert, M., & Warschat, J. (2007). Vereinfachung von Managementmethoden durch TRIZ. TRIZ: Anwendung und Weiterentwicklung in nicht-technischen Bereichen (Rietsch, P., Ed.), 61-89.

    Frenklach, G. (1998). Creative Imagination Development. TRIZ Journal, October

    Hentschel, C., Gundlach, C., & Nähler, H. T. (2010). TRIZ Innovation mit System. München.

    Mishra, U. (2006). The Father of TRIZ-As we know him-A short biography of Genrich Altshuller. TRIZsite Journal.

    Savransky, S. D. (2002). Engineering of creativity: Introduction to TRIZ methodology of inventive problem solving. CRC Press.

    Souchkov, V. (2005). Accelerate innovation with TRIZ. ICG T&C.

    Zhuravleva, V. (2005). Ballad of the Stars: Stories of Science Fiction, Ultraimagination, and TRIZ. Technical Innovation Center, Inc.

  • 01 Dec 2014 4:18 PM | Anonymous member (Administrator)

    A Shrinking Cone of Plausibility?

    Jason Swanson, APF Emerging Fellow


    Photo by r2hox, CC BY

    In my colleague Julian Valkieser’s latest blog post, Julian wrote about the start-up Mapegy, the programming language R, and Big Data analysis as they relate to creating systems models and possible applications in foresight. It was a fascinating post, and I look forward to reading more of his analysis, as I am excited about the uses for Big Data in foresight. The potential for Big Data to be disruptive is massive, and one of the potential disruptions could be to the foresight field itself.

    With the development of R and start-ups like Mapegy, along with the generation and capture of more and more data and new tools for analysis, our ability to analyze massive data sets is growing in leaps and bounds. Analysis of complex data sets combined with predictive analytics is allowing us to create increasingly accurate models and to predict outcomes and behaviors. By now most people are familiar with the story of Target using data analysis to correctly predict that one of its customers was pregnant. A more recent example is HealthMap, a project of Harvard Medical School and Boston Children’s Hospital, which predicted an Ebola outbreak nine days before the World Health Organization began reporting irregular spikes in cases.

    While neither of these is a long-range prediction, as we capture and analyze larger and larger data sets, the ability to predict outcomes and behaviors with accuracy, at least in the near term, goes up. Even though futurists are not in the prediction business, will being able to accurately assess the near term cancel out the need for long-range thinking in multiple narratives? Furthermore, would an increasing reliance on Big Data analysis and prediction affect not only the business side of foresight, but also the study and practice of foresight itself? Would the cone of plausibility shrink as we develop the ability to analyze larger data sets with increasingly sophisticated tools? Would we see a rise in wild cards?

    While I can only speculate on these questions, one possible implication is that as we gain the ability to use data analysis and models to predict outcomes with greater accuracy, the cone of plausibility may shrink. The highest-probability outcome or behavior might become a major piece, or the piece, of a baseline future, with variability from the models forming, or greatly influencing, the alternative futures. Those probabilities could create or influence the bounds of the cone of plausibility. A greater degree of accuracy, even in the near term, could act to focus or tighten the cone, in effect shrinking the bounds of plausibility.
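
    As a rough sketch of this “tightening cone” intuition, assuming invented data: fit a trend to a historical series and compute a crude prediction band at each horizon. As the residual spread shrinks with better models and more data, the band, a stand-in for the cone of plausibility, narrows.

```python
# A minimal sketch of the "shrinking cone" intuition with invented data:
# a linear trend plus a crude, widening uncertainty band per horizon.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2015)
signal = 2.0 * (years - 2000) + rng.normal(0, 3.0, size=years.size)

slope, intercept = np.polyfit(years, signal, 1)
residual_sd = np.std(signal - (slope * years + intercept))

for horizon in (1, 5, 10):
    year = 2014 + horizon
    point = slope * year + intercept
    # ~95% band, widening with horizon; a crude stand-in for the cone's width.
    # Smaller residual_sd (a better model) means a narrower cone.
    half_width = 1.96 * residual_sd * np.sqrt(1 + horizon / years.size)
    print(f"{year}: {point:5.1f} +/- {half_width:4.1f}")
```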

    As the cone of plausibility shrinks, there might also be a rise in wild cards, specifically Type 2 wild cards. Introduced by Dr. Oliver Markley in his article “A New Methodology for Anticipating STEEP Surprises”, Type 2 wild cards are defined as “having high probability and high impact as seen by experts if present trends continue, but low credibility for non-expert stakeholders of importance”. If the bounds of plausibility were to tighten, some futures that in the past might have been considered plausible alternatives could fall outside the bounds of plausibility. By falling outside those bounds, the same alternative futures could lose credibility for non-expert stakeholders of importance and, as a result, could be classified as Type 2 wild cards if their impact were judged high enough. Where the potential impact is too low to qualify as a wild card, a new term may be needed for the alternative futures that do not fit inside the bounds of the predictive models.

    It will be interesting to see the effect that Big Data will have on the foresight field. Will clients shy away from long-term thinking in favor of near- or short-term prediction? Will increasingly accurate models add to or alter our foresight toolboxes? How is the futures community currently utilizing big data and predictive analytics?

    --------------------------------------------------------------------------


    http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/

    http://www.uschamberfoundation.org/blog/post/can-big-data-predict-future/41983

    Markley, O. (2010). A new methodology for anticipating STEEP surprises. Technological Forecasting & Social Change, 78(6), 19-19. Retrieved December 1, 2014, from http://www.imaginalvisioning.com/wp-content/uploads/2010/08/Anticipating_STEEP_Surprises-TFSC2.pdf




  • 24 Nov 2014 3:42 AM | Julian Valkieser (Administrator)

    Of course, the topic of “Big Data” has already been mentioned a few times on the ProFutures blog, and of course we all know what it involves. Our activity on the Internet keeps rising, and we produce data, massive amounts of data. Worldwide, 3 billion people are already online, and we spend much of our time there. The amount of data created is projected to rise to a stunning 107,958 petabytes per month by 2018. That is more than 100 million hard drives of 1 terabyte each, a capacity most of us would never fill.

    Companies like Google act on and work with this data. Of course, they are not focused solely on this one business model; Google is spreading in different directions. But a focus can be seen: Google is also spreading more and more offline. Why?

    The data created online is relatively negligible in comparison with the data you can still collect from the physical world. Behavior patterns online are certainly interesting, e.g. for e-commerce, but behavior and properties offline are much more interesting. The greatest benefit would be, first, to analyze all the information that can be obtained and, second, to be able to deduce something from it. Exciting!

    Here I want to present an example specifically for research-intensive areas: the start-up Mapegy from Berlin, Germany.

    Mapegy is “the compass for the high-tech world”, according to its own definition. One possible application would be the following. Let’s imagine:

    I am interested in a specific topic and would like to evaluate it. Now Big Data comes into the game. Let’s take the example of a patent analysis. With tools like Mapegy I could easily figure out who the important stakeholders in a particular technological development are, how they relate to one another, and what influence they have. One method of representation is maps: stakeholders and technological developments are illustrated on a kind of map. The larger the island, the more stakeholders gather around a particular development. The higher the mountain, the more patents a stakeholder has filed. The closer the islands are to each other, the stronger their relation to one another. With this kind of visual analytics it is quite easy to illustrate how a certain subject area is connected to others.
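
    As a minimal sketch of this map idea, assuming invented patent data and using the networkx library (this is not Mapegy’s actual pipeline): stakeholders become nodes whose “mountain height” is their patent count, and co-patenting relations become weighted edges.

```python
# A minimal sketch of a stakeholder map as a graph. The patent data are
# invented, and this is not Mapegy's actual pipeline.
import networkx as nx

patent_counts = {"FirmA": 120, "FirmB": 45, "LabC": 30}      # "mountain height"
co_patents = [("FirmA", "LabC", 12), ("FirmA", "FirmB", 3)]  # shared patents

G = nx.Graph()
for firm, count in patent_counts.items():
    G.add_node(firm, height=count)   # node size = patent count
for a, b, shared in co_patents:
    G.add_edge(a, b, weight=shared)  # edge weight = strength of relation

# Influence proxy: weighted degree (sum of a node's edge weights)
influence = dict(G.degree(weight="weight"))
print(influence)  # {'FirmA': 15, 'LabC': 12, 'FirmB': 3}
```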

    And that is the sticking point: a lot of data is already available, but only the correct processing and representation make this data useful.

    At this point I want to mention "R". 

    "R is a free software programming language and software environment for statistical computing and graphics. The R language is widely used among statisticians and data miners for developing statistical software and data analysis. Polls and surveys of data miners are showing R's popularity has increased substantially in recent years." (Wikipedia)

    Someone who can program in R is well paid, even at the upper end of the scale, and not without reason. Being able to understand a context and deduce recommendations for action, not only in the economy but also in science and research, such as biotechnology and of course pharmaceuticals, is a higher aim in business and decision processes.

    If you already understand some small connections, you can use them to create a network and perhaps even explain the behavior of systems. In this specific example, it would be human behavior. Of course, the influencing factors are still too complex to allow reliable predictions from the available data collections. But the more powerful computational resources become, the closer we get to being able to analyze all the factors.

    Mapegy is an example of visualizing relationships and influencing factors via Big Data analysis. The cost of genetic testing, for example, is an indicator of how quickly data analysis will change in the coming years: in recent years its cost has fallen faster than the price of computer chips under Moore’s Law. In my next article I will go further into developments in Big Data analysis with R.



Copyright 2014 Association of Professional Futurists
