a happy cat wearing glasses thinking new ideas, highly expressive oil painting (created with Dalle2)
Integrating OpenAI into your SaaS platform can bring a wealth of benefits for your customers. Imagine automating repetitive tasks, improving decision-making, and even forecasting talent needs. In this article, inspired by a discussion with a Polish HR-platform SaaS vendor, we’ll dive into how OpenAI can be leveraged by software development companies, startups, scaleups, and SaaS vendors to revolutionize HR operations.
First, let’s remind ourselves of the most common HR system functionality:
Recruitment and Employee Screening
Company Benefits and Compensation
Rewards
Performance Evaluation
Employee Relations
Employee Records
Learning and Development
Career Planning/Succession Planning
Competency management
So, let’s get into the ideas of what OpenAI can do for your HR platform. Wherever possible, I test the ideas with ChatGPT and stress it a little, to see if it can really deliver good results.
11 Ideas for OpenAI + HR SaaS platforms
1. Resume screening: OpenAI can be trained on a large dataset of resumes and job descriptions to automatically screen and identify resumes that are the best fit for a particular job opening.
Example: I ask ChatGPT to read my LinkedIn resume and identify the roles I would be a good fit for.
Here is the prompt I give:
ChatGPT then gives me an ok-ish reply, repeating more or less the information I have put in my resume, but as a list. Not impressive, but still valid information.
Here is what I get:
Now, let’s give it something more difficult as a test.
I am NOT a developer and I would be a terrible fit for a Chief Developer position. However, since I do have a technical background and have been working with devs for many years, I thought I would be sneaky and try to fool ChatGPT into believing that I am a fantastic fit for a dev position.
And the results are….
So, I couldn’t fool ChatGPT and OpenAI. Indeed, it correctly pointed out that I give no evidence of real hands-on experience in developing software. Well done ChatGPT, thanks for destroying my developer career…
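To make this concrete, here is a minimal sketch of how a resume-screening call could look with the openai Python package. The prompt wording, the 1-10 scoring scale and the text-davinci-003 model choice are my own illustrative assumptions, not a reference implementation.

```python
# Minimal resume-screening sketch (illustrative assumptions: model choice,
# prompt wording, 1-10 scoring scale). Requires OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def screen_resume(resume_text: str, job_description: str) -> str:
    prompt = (
        "You are an HR assistant. Rate how well the following resume fits the job "
        "description on a scale of 1-10 and briefly justify the score.\n\n"
        f"Job description:\n{job_description}\n\nResume:\n{resume_text}\n\nAssessment:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model choice
        prompt=prompt,
        max_tokens=300,
        temperature=0.2,           # low temperature for more consistent scoring
    )
    return response["choices"][0]["text"].strip()
```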
2. Interview assistance: OpenAI can assist in the interview process by generating interview questions based on a job description, or by analyzing the candidate’s responses and providing feedback to the interviewer.
Example: Give me 10 interview questions for someone applying for a Chief Information Security Officer Position in a Bank.
I like the questions, and I also like the fact that it created a few that are domain-related (e.g. the one about regulatory requirements such as PCI-DSS). I would happily use these questions as inspiration in my next interview process, to help me create better and more targeted questions for candidates.
3. Candidate matching: OpenAI can be trained on a dataset of successful hires and their job descriptions to predict which candidates are the best match for a particular job opening.
Example: As said earlier, I am not a dev, but my brother is. So I take both our resumes from LinkedIn and give them to ChatGPT. I then ask it to analyze both resumes and tell me which of the two candidates is a better fit for a developer position.
And the results below are again correct. ChatGPT figured out that my brother is the right candidate and justifies why. It also explains why I am NOT a good fit, despite having experience working with developers. Good job again, ChatGPT; at least the job stays in the family…
4. Chatbot for candidate support: OpenAI can be used to build a chatbot that can answer common questions from candidates, such as information about the company culture, benefits, etc.
Example: I ask ChatGPT for a list of the main benefits for employees at Microsoft
I then follow up with a question about what ESPP and matching 401(k) contributions are. And here are the results…
Now, you could go even further and try to create a full-fledged HR chatbot, which ideally would offer the following functionality (a code sketch follows the list):
Employee Self-Service: Employees should be able to access information about their benefits, pay stubs, vacation time, and other HR-related information through the chatbot.
Onboarding: New employees should be able to use the chatbot to complete onboarding tasks, such as filling out paperwork and completing compliance training.
Time off and Leave Management: Employees should be able to request time off and check the status of their leave requests through the chatbot.
Benefits Enrollment: Employees should be able to view their benefits options and enroll in coverage through the chatbot.
Employee Directory: Employees should be able to search for contact information for other employees in the organization through the chatbot.
Employee Feedback: Employees should be able to provide feedback and suggestions to HR through the chatbot, with the option to make suggestions anonymously.
HR Policies and Procedures: Employees should be able to access HR policies and procedures through the chatbot, such as company code of conduct, diversity and inclusion policies and emergency procedures.
Employee Recognition: Employees should be able to access information and submit nominations for employee recognition programs through the chatbot.
The chatbot should be able to understand natural language and answer questions about employee benefits, policies and procedures, and other HR-related topics.
It should also integrate with other HR systems to provide accurate and up-to-date information, and route complex queries to the appropriate HR representative.
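As a rough sketch of the chatbot idea above, the snippet below grounds the model in a small block of HR policy text and escalates anything it cannot answer. The policy snippet, the ESCALATE convention and the model choice are illustrative assumptions.

```python
# Sketch of a single HR-chatbot turn: answer only from the supplied policy text,
# otherwise flag the question for a human HR representative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

HR_CONTEXT = """
Vacation: employees accrue 2 days per month.
Benefits enrollment window: 1-30 November each year.
"""  # in a real system this would come from the HR knowledge base

def hr_chatbot_reply(question: str) -> str:
    prompt = (
        "Answer the employee question using ONLY the HR information below. "
        "If the answer is not covered, reply with exactly ESCALATE.\n\n"
        f"HR information:\n{HR_CONTEXT}\nQuestion: {question}\nAnswer:"
    )
    completion = openai.Completion.create(
        model="text-davinci-003",  # assumed model choice
        prompt=prompt,
        max_tokens=150,
        temperature=0,
    )
    answer = completion["choices"][0]["text"].strip()
    if answer == "ESCALATE":
        return "I will forward this question to an HR representative."  # route to a human
    return answer
```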
5. PR and internal communication messages: OpenAI can help you provide your customers with custom-made, AI-generated messages in the tone you prefer.
Example: I ask ChatGPT to create the message that a bank CEO would send to all employees, announcing layoffs to the organization. Certainly the toughest message to send in an organization.
Here is what ChatGPT came up with:
6. Business English: OpenAI can help with writing sentences related to HR procedures in proper business English.
Example: Write 5 different sentences to inform a candidate on withdrawal via email
7. Communication with Candidates: Writing emails to communicate a specific message to candidates.
Example: Rewriting, in proper business English, a complete email message informing a candidate that they were not selected.
8. Translation
Example: Please translate the previous message into Polish
9. Create AI-generated job descriptions: OpenAI can create job descriptions for your HR people that can be posted on LinkedIn.
Example: I ask ChatGPT to create a job description for the Chief Information Security Officer position in a bank.
Here are the results; pretty solid, and ready to become a full job ad after a few edits from the hiring manager:
10. Interview scheduling: OpenAI can be used to automate scheduling interviews with pre-selected candidates, by using natural language processing to understand the availability of both the candidate and the interviewer.
There is no built-in Outlook/calendar integration, so you have to build it yourself. You can then use NLP to find the availability of both sides (HR and the candidate).
One could argue that there are easier ways to find common calendar availability that might work faster than the NLP one. I am not sure what works faster; I would need to try a real NLP solution for this, and I haven’t seen anything similar in the market yet.
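For the NLP piece of this idea, here is a hedged sketch of the extraction step: the model turns a free-text availability reply into structured JSON that a calendar service could then match against the interviewer’s slots. The JSON schema and the model choice are illustrative assumptions.

```python
# Sketch: extract structured availability from a candidate's free-text email.
import os
import json
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def extract_availability(email_text: str) -> list:
    prompt = (
        "Extract the interview availability from this email as a JSON list of "
        'objects with "day" and "time_range" fields. Return only the JSON.\n\n'
        f"Email:\n{email_text}\n\nJSON:"
    )
    completion = openai.Completion.create(
        model="text-davinci-003",  # assumed model choice
        prompt=prompt,
        max_tokens=200,
        temperature=0,
    )
    return json.loads(completion["choices"][0]["text"].strip())

# Example: extract_availability("I'm free Tuesday 14:00-17:00 or Thursday morning.")
```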
11. Competency Based Management Support
A few HR systems provide support for a competency-based approach in the organization, and in that case a competency dictionary is necessary.
In the example below, I ask ChatGPT to write a behavior-based competency description for “Leadership”.
Then I ask for a questionnaire that can be used to self-assess the leadership competency level. I get open-ended questions at first, so I then try to turn them into a multiple-choice questionnaire.
And the multiple-choice results:
I hope these ideas provide a spark of inspiration to start looking into OpenAI/ChatGPT and how to integrate it in your SaaS HR Platform.
OpenAI and ChatGPT are the new cool kids on the block, so let’s have a look at how startups and software development companies can leverage them to create cutting-edge applications. OpenAI is already available as a managed service on Azure, and ChatGPT is coming soon.
In this post we’ll have a look at:
What is OpenAI
Dall-E
GPT-3
ChatGPT
The 4 GPT-3 language models that startups can use
The Codex models
Potential Use Cases for Startups (overall and per language model)
What apps are startups already developing with OpenAI
Open AI on Azure: How to get access to it, integrations, advantages and pricing models + cost considerations
Open AI competition
Limitations of OpenAI
Resources and further information
(Note: If you are a startup, you can get $2,500 USD in free OpenAI credits by applying to the Microsoft Founders Hub program.)
Introduction to OpenAI, DALL-E, GPT-3 and ChatGPT: Capabilities and Potential Use Cases for New Apps
OpenAI is a research company that was founded in 2015, in San Francisco, by Elon Musk, Sam Altman (former president of Y Combinator), Peter Thiel and others, with the goal of developing and promoting friendly AI in order to benefit humanity as a whole. The initial funding was $1 billion USD, and Microsoft invested an additional $1 billion in 2019.
OpenAI has released 3 AI solutions that have become super popular in the past 12-24 months: GPT-3, Dall-E and ChatGPT.
GPT-3
GPT-3 (Generative Pre-trained Transformer 3) is a language model trained on hundreds of billions of words from the internet. It can generate human-like text, translate text, summarize text and answer questions. It can write poems and science fiction, and power chatbots and virtual assistants that can hold natural conversations with humans.
In a dialogue, GPT-3 can be empathetic, keep the context and “remember” it, and copy your style of communication. Here are three neat examples from the startup Replika’s presentation (link) that illustrate these three capabilities.
Dall-E
Dall-E is a deep learning model that can generate digital images from a text description. It can also create images “similar” to an image you give it, or “continue and extend” your images.
In the example below, I asked Dall-E to create the interior design for a small living room. It didn’t get it 100% correct, but it wasn’t too bad for a start.
One of the following 4 paintings belongs to my 6-year-old daughter. I gave it as input to Dall-E and it created the other three paintings, based on my daughter’s style. My daughter couldn’t actually identify which one was hers when I asked her a couple of weeks later.
Which one is the original, and which ones did Dall-E make?
And getting more creative, I asked Dall-E to act as if it were the famous painter Basquiat and paint the image below for me. I could see this hanging in a room…
Tip for music lovers: I am more into creating music, and there is an open-source AI project for auto-creating loops from text prompts: try it at https://www.riffusion.com. I’m very much looking forward to the era when AI will help us write Cubase songs with just text prompts such as “Write a drums MIDI rhythm in Cubase, similar to the intro of Paradise City, and add a bass line in A minor, with the funky bass sound of RHCP”.
But enough with the artistic endeavors; let’s get back to business. Let’s have a look at ChatGPT.
ChatGPT
ChatGPT is the conversational version of GPT-3 that can be used to create chatbots, voice assistants, and other conversational AI applications. Its output can be text or even code (e.g. a Python script).
If you have played with ChatGPT, just skip this section, otherwise here are a few examples of what it can do:
I ask ChatGPT to create a one-day travel itinerary.
Can ChatGPT create a poem dedicated to all of us musicians trapped in the body of an electrical engineer?
That’s at last a song I can relate to!
Then I ask ChatGPT to write the code for a simple educational game I can give to my daughter to help her learn the multiplication tables. Here is the code I got:
I tried the code in Repl.it to see if it actually works, and indeed it works as expected:
Obviously, you should not blindly trust the produced code. Since I am not a developer, I asked my brother (who is a ninja-guru dev) to run a few tests on code produced from text I would write (he can check it just by reading it, pure magic to my eyes), and he agreed that it’s in pretty good shape for general-purpose code: obviously far from substituting the work of devs, but good enough to save them a few hours per week.
Going back to the capabilities of ChatGPT, I asked it to create a VC pitch for a startup idea I gave it as input: a drone delivery service for sailboats off the island of Mykonos in Greece.
Well… it even came up with a nice name for my startup, “Saildrone”. Here is the pitch:
ChatGPT remembers the context of the previous questions and builds on that knowledge. Here is a follow-up question and its answer:
It can get creative with providing ideas for ads on social media and the internet.
And obviously, translation into French is an easy task:
I also asked it to suggest a TV ad scenario, and here is what I got:
I also try to find the intersection of ChatGPT with Dall-E, so I ask ChatGPT to create a text prompt for an ad that I can use with Dall-E.
And here is what I get from Dall-E for the above prompt.
Not there yet but I am sure I can get a better result if I try a few more times.
One of the main advantages of GPT-3 and ChatGPT is that they can be fine-tuned for specific tasks and use cases. For example, a chatbot built using ChatGPT can be fine-tuned to understand and respond to specific domain-specific phrases. This allows developers to create chatbots that are tailored to their specific business or industry, providing a more personalized experience for users.
So in the above example, if you are the “Saildrone” startup, you could easily create a chatbot that takes orders or replies to customer questions and is more targeted to the sailing tribe (e.g. the specific phrases used to describe ports, docks, and other potential delivery areas).
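For reference, here is a rough sketch of what fine-tuning on your own prompt/completion pairs looked like with the openai Python package at the time of writing. The JSONL file name, the example line and the choice of the davinci base model are illustrative assumptions, not Saildrone data.

```python
# Sketch of fine-tuning a base GPT-3 model on domain-specific examples.
# training_data.jsonl contains lines such as:
# {"prompt": "Where can you deliver near Ornos bay? ->", "completion": " We deliver to ..."}
import openai

upload = openai.File.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

fine_tune_job = openai.FineTune.create(
    training_file=upload["id"],
    model="davinci",        # assumed base model to fine-tune
)
print(fine_tune_job["id"])  # poll this job id until the fine-tuned model is ready
```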
Another advantage of OpenAI and ChatGPT is that they can be used to create applications with minimal development time. Because GPT-3 and ChatGPT are pre-trained on large datasets, developers can use them as a starting point for their own applications, rather than having to train their own models from scratch. This saves a significant amount of time and resources and allows developers to focus on creating the user interface and other aspects of their applications.
So from all the above examples and discussions, we can think of a few potential use cases for OpenAI and ChatGPT (and here are more of them: https://openai.com/blog/gpt-3-apps/):
To create chatbots for providing customer service for an e-commerce website
For helping e-banking websites customers understand in simple terms the different terms of accounts/finance jargon etc, or navigate the website
To answer questions of citizens in gov portals on how to do e-citizen tasks
To summarize medical reports
To summarize news
To create articles, essays, blogposts, content
To scan profiles of employees and provide feedback to HR
To translate in friendly or business style, the content of a website
To write replies to emails
To write simple code that can help us build an application
To do financial analysis and forecasting
To do sentiment analysis on call center calls and find the VIP customers who are unhappy with your service
To generate educational content, games, quizzes, and stories for children
And many more…
The Four Language Models of GPT-3
OpenAI GPT-3 provides four different language models, called Ada, Babbage, Curie, and Davinci.
How do they differ, and what types of apps can you develop with each?
Ada
Ada is the smallest and most cost-effective of the four models, designed for simpler language tasks such as sentiment analysis and intent classification. Ada can understand and respond to text input in a basic way, but it does not have the fine-tuning capabilities that the other models have. The pricing for Ada is per API call, making it a good option for startups who need a simple and cost-effective solution.
Use for: Parsing text, simple classification, address correction, keywords
Babbage
Babbage is the next step up from Ada, and it is designed for more complex language tasks such as language translation and text summarization.
Use for: Moderate classification, semantic search classification
Curie
Curie is the next step up from Babbage, and it is designed for even more complex language tasks such as question answering and conversation generation.
Use for: Language translation, complex classification, text sentiment, summarization
Davinci
Davinci is the most powerful of the four models and is designed for the most complex natural language tasks, such as creative writing, poetry, and fiction generation. It is also the most expensive of the four models.
Use for: Complex intent, cause and effect, summarization for audience
Here are some examples of how to integrate each one of them into your solution:
A startup that specializes in language learning could use Ada to create a chatbot that can hold basic conversations with users in different languages, providing a simple and cost-effective solution. Babbage could be used to create a chatbot that can translate phrases and idioms for users in different languages, providing a more advanced solution. Curie could be used to create a chatbot that can answer questions about grammar and other language-specific topics. Davinci could be used to create a chatbot that can generate creative writing such as poetry.
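A quick way to choose between the four models is simply to run the same prompt against each and compare quality against price. A minimal sketch, assuming the public model names used at the time (text-ada-001 through text-davinci-003):

```python
# Compare the four GPT-3 model tiers on one prompt before committing to one.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

MODELS = ["text-ada-001", "text-babbage-001", "text-curie-001", "text-davinci-003"]
prompt = "Explain the French idiom 'coup de foudre' to an English learner."

for model in MODELS:
    completion = openai.Completion.create(
        model=model, prompt=prompt, max_tokens=100, temperature=0.3
    )
    print(f"--- {model} ---")
    print(completion["choices"][0]["text"].strip())
```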
If you are wondering about the pricing of each one, here is the table with the pricing of the different OpenAI language models on Azure.
Pricing of Ada, Babbage, Curie and Davinci
The Codex Models
Codex is a fine-tuned version of the fully trained GPT-3 model. These are models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub (159 GB of public repositories were used to train the model).
They’re most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
As with GPT-3, the Davinci model is the strongest at analyzing complicated tasks, while the Cushman model is the faster one for code generation tasks (and it is cheaper than Davinci).
If you are into reading research papers, this is the one for Codex: https://arxiv.org/abs/2107.03374 (PDF download of the paper here). The paper says that Codex solved close to 30% of the problems the researchers gave it, according to the HumanEval evaluation set they created.
GitHub Copilot is the most famous integration at the moment.
Pygma aims to turn Figma designs into high-quality code.
Replit leverages Codex to describe what a selection of code is doing in simple language so everyone can get quality explanation and learning tools. Users can highlight selections of code and click “Explain Code” to use Codex to understand its functionality.
Machinet helps professional Java developers write quality code by using Codex to generate intelligent unit test templates.
I think what matters most here for startups is saving time for their dev team. The dev cost per hour ranges from 20 USD to even 200 USD, so every hour saved is meaningful. And Codex and GitHub Copilot seem to be delivering on that front.
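In the spirit of the Machinet example, here is a hedged sketch of asking a Codex model to draft unit tests for an existing function. The model name reflects the Codex naming used at the time (code-davinci-002), and the function under test is made up.

```python
# Sketch: use a Codex model to draft pytest tests for an existing function.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

source = '''
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)
'''

completion = openai.Completion.create(
    model="code-davinci-002",  # Codex model name at the time of writing
    prompt=source + "\n# Write pytest unit tests for apply_discount\n",
    max_tokens=200,
    temperature=0,
)
print(completion["choices"][0]["text"])
```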
OpenAI, ChatGPT and Azure: Integration, How to get Access, Advantages, Cost
Integration
Both OpenAI and ChatGPT are integrated with Azure using the OpenAI GPT-3 API. The API can be accessed via Python, Java, C#, JavaScript, etc., through Azure’s Cognitive Services, a set of pre-built APIs for natural language processing, computer vision, and other AI-related tasks. There is also a ChatGPT extension for VS Code: https://gpt3demo.com/apps/chatgpt-for-vscode
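For orientation, here is a minimal sketch of pointing the openai Python package at an Azure OpenAI resource instead of the public endpoint. The resource name, deployment name and API version below are placeholders you would replace with your own.

```python
# Sketch: call a model deployed on Azure OpenAI (placeholders, not real values).
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource-name>.openai.azure.com/"
openai.api_version = "2022-12-01"          # check the currently supported version
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

completion = openai.Completion.create(
    engine="my-davinci-deployment",        # the deployment name created in Azure
    prompt="Summarize the benefits of managed AI services in two sentences.",
    max_tokens=80,
)
print(completion["choices"][0]["text"].strip())
```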
How to Get Access To Azure OpenAI
If you want to try OpenAI and ChatGPT on Azure, just create an Azure account and sign up for the OpenAI GPT-3 API. If you are a startup, you can get $2,500 USD in free OpenAI credits by applying to the Founders Hub program.
If you are not a startup, then you have to apply for access here.
One of the main benefits of using OpenAI and ChatGPT on Azure is scalability. Because the GPT-3 model is hosted on Azure’s cloud, developers can easily scale their applications to handle a large number of requests without having to worry about managing their own infrastructure. Additionally, using Azure’s Cognitive Services allows developers to add more functionality to their applications without having to build it from scratch.
Azure Cognitive Services is a collection of pre-built APIs for natural language processing, computer vision, and other AI-related tasks. These APIs can be used to add additional functionality to applications built with OpenAI and ChatGPT, such as natural language understanding, sentiment analysis, and image recognition.
Some of the functionality of Azure Cognitive Services that developers can use include:
Language Understanding (LUIS): This API allows developers to add natural language understanding to their applications. It can be used to identify and extract entities, intents, and other information from text input.
Text Analytics: This API allows developers to extract insights from unstructured text data, such as sentiment analysis, key phrase extraction, and language detection.
Speech to Text and Text to Speech: These APIs allow developers to add speech recognition and text-to-speech capabilities to their applications.
Computer Vision: This API allows developers to analyze images and videos to extract information such as objects, faces, and text. It can be used for tasks such as image recognition, object detection, and OCR.
Personalizer: This API allows developers to personalize the user experience by providing recommendations and content tailored to individual users.
Anomaly Detector: This API allows developers to detect anomalies in time series data, such as network traffic or sales data.
Translator: This API allows developers to add language translation functionality to their applications, supporting more than 60 languages.
News Search: This API provides access to a search index of news articles, allowing developers to search for news by keyword, category, or location.
So, in simple terms, by using all these APIs, your developers don’t have to build everything from scratch and can focus on the UI, security, and other aspects of their apps. You also get all the benefits of a managed service (scalability, elasticity, performance, security, etc.).
Pricing of the Azure OpenAI service
Pricing of OpenAI on Azure follows a pay-as-you-go model, so developers only pay for the requests they make to the API. How much does it cost? You can check the pricing details here and in the table below.
Since the API pricing is based on tokens, it’s important to understand them.
A token is a chunk of a few letters. E.g. the word “hamburger” is 3 tokens (ham - bur - ger). The complete works of Shakespeare are about 900,000 words, which is around 1.2 million tokens. So, consider that 750 words are worth roughly 1,000 tokens, i.e. a token-to-word ratio of about 1.33 (often rounded up to 1.4 for estimates).
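If you want exact counts rather than the rule of thumb, the open-source tiktoken package tokenizes text locally. A small sketch, assuming the p50k_base encoding used by the Davinci-era models:

```python
# Count tokens exactly instead of estimating from word counts.
import tiktoken

enc = tiktoken.get_encoding("p50k_base")  # encoding of the GPT-3/Davinci-era models
for text in ["hamburger", "Please summarize this customer complaint for me."]:
    print(f"{text!r} -> {len(enc.encode(text))} tokens")
```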
Another pricing consideration is that, in the API Call, you get charged both for the prompt you are sending and for the predicted text.
So, if you are using the best and most expensive OpenAI model, the “Davinci” model, it costs 2 cents for every 1,000 tokens (roughly 750 words produced via the API calls). This can easily add up to large numbers if you are building apps whose text features will be heavily used by a large customer base. Make sure you take this into account when building your apps. There are startups that had to move away from OpenAI at a later stage due to costs (e.g. see the Replika presentation on that here).
Below are a couple of examples of how to calculate the total cost of using the OpenAI service on Azure; basically, you need to factor in:
Which language models you are going to use (Davinci? Curie? Both?)
The Tokens you will need per day
The hours you are going to deploy the model
The Fine-tuning cost
Let’s look into a simple example of a pricing calculation for a new solution.
Let’s suppose that you want to create an application that connects to a popular e-commerce store’s website as a customer feedback chatbot, and to its call center to do voice-to-text on customer feedback calls. It then analyzes the feedback, classifies complaints, and does sentiment analysis.
Let’s assume that you have 60 calls per hour from 8:00 in the morning to 20:00, i.e. 12 hours per day, for 6 working days. And that you also have 20 customers per hour providing written feedback via the chatbot on the website, for the same 12 hours per day. Let’s assume 250 words of text from each customer for both channels (website chatbot and call center).
So:
80 touchpoints per hour, with 250 words per touchpoint, 12 hours per day
This is a total of 80 x 12 x 250 = 240,000 words per day
1 word is roughly 1.4 tokens, so we need 240,000 x 1.4 = 336,000 tokens per day
Using the Davinci model, at 0.02 USD per 1,000 tokens, this is 6.72 USD per day, or around 161 USD for 24 days of the month (running the service from Monday to Saturday).
Now that doesn’t seem much, but the calls per hour might be 10 to 100 times higher and the same could go for the feedback gathered from the website, depending on the popularity of the store and the market it operates in.
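A tiny calculator makes it easy to replay the estimate above with your own traffic numbers. The 1.4 tokens-per-word rounding and the 0.02 USD per 1,000 Davinci tokens come from the text; everything else is a plug-in parameter.

```python
# Back-of-the-envelope monthly cost for the feedback-analysis scenario above.
TOKENS_PER_WORD = 1.4        # conservative rounding of ~1.33
PRICE_PER_1K_TOKENS = 0.02   # Davinci, USD

def monthly_cost(touchpoints_per_hour, words_per_touchpoint, hours_per_day, days_per_month):
    words_per_day = touchpoints_per_hour * hours_per_day * words_per_touchpoint
    tokens_per_day = words_per_day * TOKENS_PER_WORD
    cost_per_day = tokens_per_day / 1000 * PRICE_PER_1K_TOKENS
    return cost_per_day * days_per_month

# 80 touchpoints/hour, 250 words each, 12 hours/day, 24 working days:
print(round(monthly_cost(80, 250, 12, 24), 2))  # -> 161.28 USD
```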
Finally, bear in mind that, for the moment, the OpenAI service on Azure is available in the South Central US region, which you may want to test for latency-related issues in your application.
Use Cases and Examples of Startups that are currently using OpenAI
Here are a few examples of use cases and startups using OpenAI, for your inspiration.
CarMax: Used car retailer CarMax has used Azure OpenAI Service to help summarize 100,000 customer reviews into short descriptions that surface key takeaways for each make, model and year of vehicle in its inventory. Read about the case study here.
In the healthcare industry, GPT-3 is being used to generate medical reports and summaries. A startup called Medopad uses GPT-3 to generate medical reports from patient data, allowing doctors to spend more time with patients and less time on paperwork. Another company called Freenome uses GPT-3 to analyze genetic data and identify potential health risks, helping doctors make more informed decisions about patient care.
In the finance industry, GPT-3 is being used for financial analysis and forecasting. A startup called Blue River uses GPT-3 to analyze financial news and generate reports on market trends, helping traders make more informed decisions. Another company, called Alpaca uses GPT-3 to analyze financial data and generate trading strategies, helping traders identify profitable opportunities in the market.
In the e-commerce industry, GPT-3 is being used to generate product descriptions, improve search results and create chatbots that can answer customer questions. A company called Verbling uses GPT-3 to generate product descriptions for their language learning platform, and another company, called Sift Science, uses GPT-3 to improve search results on their e-commerce platform.
Descript is a powerful video editor reshaping the way creators engage with content by using AI to make video editing as simple as editing a text document.
Harvey is developing an intuitive interface for all legal workflows through powerful generative language models. Its technology expands a lawyer’s capabilities by leveraging AI to make tedious tasks such as research, drafting, analysis, and communication easier and more efficient. This saves lawyers time, ultimately allowing them to deliver a higher quality service to more clients.
Mem is building the world’s first self-organizing workspace. Starting with personal notes, Mem uses advanced AI to organize, make sense of, and predict which information will be most relevant to a user at any given moment or in any given context. Mem’s mission is to build products that inspire humans to create more, think better, and spend less time searching and organizing.
Speak is on a mission to help more people become fluent in new languages, starting with English. The company initially launched in East Asia with a focus on South Korea, and has nearly 100,000 paying subscribers. Speak is creating an AI tutor that can have open-ended conversations with learners on any number of topics, providing real-time feedback on pronunciation, grammar, vocabulary, and more.
There are more apps for your inspiration at https://openai.com/blog/gpt-3-apps/ and as more companies discover the benefits of using OpenAI and ChatGPT on Azure, we could expect to see even more innovative use cases and success stories in the future. You can also have a look at https://gpt3demo.com/ and https://gptcrush.com/ for a list with apps that use GPT-3
Open AI Competition
Here are the main competing solutions to OpenAI’s GPT-3:
Anthropic: Its language model performance seems to be in the top 3 at the moment
Google: Google has several AI models, such as BERT and T5, which are similar to OpenAI’s GPT-3
Microsoft: Microsoft has the Azure Cognitive Services, which include LUIS (Language Understanding) and Text Analytics
Amazon: Amazon has Amazon Lex and Amazon Transcribe, which are similar to OpenAI’s GPT-3
Facebook: Facebook has an AI platform which includes RoBERTa, which is similar to OpenAI’s GPT-3
IBM: IBM Watson is the relevant solution here, offering natural language processing, computer vision and other capabilities
Baidu: Baidu is a Chinese company with several AI research teams, and its model similar to OpenAI’s GPT-3 is called ERNIE.
A summary of all the language models can be found here, and an assessment of language models from Stanford University can be found here.
State of Language Models as of 2023
Limitations of OpenAI & ChatGPT
OpenAI and ChatGPT are powerful AI tools, but like any technology, they have some limitations. Some of the main limitations of OpenAI and ChatGPT include:
1. Data bias: OpenAI and ChatGPT are trained on large datasets of internet text, which can introduce bias into the model’s predictions and outputs. This can be particularly problematic for sensitive applications, such as healthcare or financial analysis.
For example, if a chatbot that uses GPT-3 is trained on a dataset that is mostly written by men, it may not be able to understand or respond well to text written by women, or it may generate responses that are insensitive or offensive to women. Another example: a language model trained on a dataset that is mostly written in English may not be able to understand or respond well to text written in other languages, or it may generate responses that are not accurate or appropriate for that language.
2. Lack of control: OpenAI and ChatGPT are pre-trained models, which means that users have limited control over the specific parameters and settings of the model. This can make it difficult for users to fine-tune the model for specific use cases.
For example, a startup that wants to create a chatbot that can hold natural conversations with users in a specific industry, such as finance, may not be able to fine-tune the model to understand and respond to specific industry-specific phrases and jargon. This can make it difficult for the chatbot to hold natural conversations with users in that industry and make the experience less personalized for the users.
Another example could be if a startup wants to create a language model that can generate text in a specific tone or style, such as a formal tone, it may not be able to fine-tune the model to generate text in that specific tone. This can make it difficult for the model to generate text that is appropriate for the desired use case.
It’s important to note that OpenAI has released a number of tools and guidelines for fine-tuning GPT-3 for specific use cases, such as the GPT-3 fine-tuning API, which allows developers to fine-tune the model on their own dataset, making it more suitable for their specific task. Additionally, OpenAI offers several options for controlling the temperature and other parameters of the model to make it more suitable for specific use cases.
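As a small illustration of the temperature knob mentioned above, the sketch below runs the same prompt at a low and a high temperature; the model choice and prompt are illustrative.

```python
# Same prompt, two temperatures: 0 is near-deterministic, higher is more varied.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
prompt = "Write a one-line tagline for an HR chatbot."

for temperature in (0.0, 0.9):
    completion = openai.Completion.create(
        model="text-davinci-003",  # assumed model choice
        prompt=prompt,
        max_tokens=30,
        temperature=temperature,
    )
    print(temperature, "->", completion["choices"][0]["text"].strip())
```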
3. Limited interpretability: OpenAI and ChatGPT are neural networks, which means that they can be difficult to interpret and understand. This can make it difficult for users to understand how the model is making its predictions and to identify and correct errors.
4. High cost: OpenAI and ChatGPT are commercial products, which means that they can be quite expensive to use, depending of course on the use case. This can be a barrier for some startups and small businesses that may not have the budget to use them.
5. Training data cutoff: The model is trained on data up to 2021, so it misses the last couple of years of events and information.
6. Prompt size: The maximum OpenAI text prompt length is 2,048 tokens, which is around 1,500 words.
In my discussions with software houses that already have an on-premise software solution, the question of “Why should we move to the cloud?”, often comes up.
I know it is 2019 and in many parts of the world the discussion is not “Why cloud?” but “Which cloud?”, but a lot of software companies founded before 2005 still have their solutions on-premise.
The more “traditional” the software house, the more years in the business, and the more customers it has on an on-premise solution, the bigger the resistance to move to the cloud world.
And I cannot blame the resistance to change.
On-premise software has a higher profit margin than a cloud, software-as-a-service solution. The developers have to build new skills to develop optimally for the cloud. The IT people supporting and developing the solution have to learn about cloud infrastructure and the DevOps world. The sellers have to change from calling and meeting customers face-to-face to delivering online demos and following up on leads that come from web campaigns. And marketing has to transform into a sales-oriented department that creates sales funnels and measures web visitors, customer acquisition costs, lifetime value, campaign costs, A/B testing of websites and online campaigns, and much more.
It’s not an easy task. But given all the benefits and the latest trends, it’s an almost inevitable one.
So, here are the top reasons software houses choose to move their solutions to the cloud. It is a mix of reasons that are valid for IaaS and/or SaaS cases.
#1 Predictability in Revenues
Selling on-premise software, even with software assurance or a maintenance contract, makes it difficult to predict when you are going to get paid again in the future. New users may use the product without declaring the licenses, and customers do not readily buy the new, upgraded versions. Moreover, you cannot safely predict the quarter in which sales will land, and it’s pretty easy for deals to slip by a few months and hurt your forecast accuracy.
When selling your software as a service, via the cloud, you usually have monthly payments and you build an “annuity business”, the famous MRR, which stands for Monthly Recurring Revenue: the equivalent of collecting a monthly “rent”. Knowing that I have 1,000 users paying 19 euros per month for my SaaS solution, plus my churn rate and monthly growth, makes it pretty easy to predict the revenues of the coming months and quarters. This improves the cash flow and financial health of the company, as it “protects” it from the effects of poor sales forecast accuracy.
#2 Shorter Sales Cycles thanks to the free trial
A typical sales cycle of an on-premise software solution to a private sector customer is around four to six months, starting from the first meeting to pitch the solution, until the moment that you see the first euro. Yes, I know that many times it can be 12 to 18 months (especially with enterprise solutions) and in the Public Sector case, we count years instead of months. But let’s keep the six months example.
Most SaaS solutions offer a free trial for some days, weeks or months, usually a 15-day or 30-day free trial. This makes it easy for a potential customer to try the software and decide whether it fits their needs. And the customer is psychologically driven to make a go or no-go decision after thirty days, especially if there is a discounted price.
We all know the effects of the free-trial but let’s think about it for a moment with a different example. Imagine that you want to buy a new TV and Samsung brings you the TV for free for 30 days in your living room and gives you a 20% discount to purchase it. Otherwise, you can just say no and Samsung will come and pick it up from your house at no cost.
Compare this to the state of “I am thinking about buying a new TV“, or “talking to sellers in retail shops about TVs“. Which one would create a faster sales cycle?
This is what is happening with the free-trial effect on your sales cycle.
#3 Easier Up-Selling of the more expensive versions
Up-selling is the sales tactic of selling the more expensive version of your product. You go to McDonald’s for a single burger and end up buying a double one. You get the basic, low-definition Netflix subscription for 7.99 USD/month and then get sold the HD one for 11.99 USD/month.
In the on-premise world, upselling means that you have to explain the additional features to the customer, set up a new contract, negotiate discounts, get new internal approvals, deal with technical IT issues. The customers will be waiting for the renewal date of the previous contract to decide on new products. And when you are trying to upsell your new version, you often get quotes such as “let’s wait for others to use it first, maybe it’s too soon and it has bugs,” or “I need one feature but not of all the new ones, and I am doing my job fine with the old version”.
It’s more of a psychological game. If you are used to buying a TV every 3 to 5 years, you will not easily “upgrade” after 12 months. On the other side, if you were paying 15 euros per month to own your TV and someone gave you a new bigger and better one after 12 months for 18 euros/month, you would most probably get the deal.
#4 Easier Cross-Selling of other solutions
Cross-selling is the sales tactic of getting sold a complementary product. You go to McDonald’s for a burger and you always get asked: “Do you want fries with that?”
Cross-selling in the online saas world is pretty straightforward. You go to Amazon to buy a book and you are getting sold a Kindle, an Amazon Music subscription, and more relevant books. The online process of cross-selling is all automated and non-dependent on sellers’ incentives and quotas.
In the on-premise world, there are some obstacles. E.g., a seller who sells Product A might not be incentivized to sell Product B, if it is not in his quota and sales targets. Moreover, the seller may not want to spend the time to get to know Product B, C, D of the company. Even if the seller knows B,C,D and they are in his sales targets, if they are not easily sold and they risk Product A, which is 80% of his revenue, he might not even bring them into the discussion with the customer.
#5 Global Reach
Just try to sell on-premise software in another country. You will most probably need to set up an office, create a partner channel and find system integrators to work with your products. This means you end up paying a big up-front cost for the risk of expanding your business into a new country. And if you fail to set up a business in Country A, you have less capacity to go to Country B and then to Country C.
Enter the SaaS world. You localize the SaaS solution, the sales funnel and the ads, and you try your luck in Country A. You fail fast, at a small cost. You go to Country B. And C. And D. Until you find success. OK, it’s never as simple as that, but you get the idea. Trial and error in going global costs much less in the cloud world.
#6 Easier way to create a sales funnel and measure results. Marketing is the New Sales.
In the on-premise world, your sellers (internal or partner sellers) keep the keys to your customers. Are you sure that if all your sellers disappeared today, you would continue to sell in the coming days? Most probably not.
In the cloud world, and in SaaS applications, you can build a predictable sales engine. You can build a marketing funnel, which you feed with a specific input (money for ads, keywords, content marketing, social media campaigns, adwords), and you get a predictable output (number of web visitors, number of trials, conversion rates from visitors to trial and to paid customer). You can optimize this by A/B testing and trial-error mechanisms.
In the cloud, you have a smaller dependency on your sales team and a bigger dependency on your online marketing team.
Actually, Marketing is the New Sales in the cloud world.
#7 Lower adoption cost for the end customer
A software solution can be deployed in 4 ways:
As an on-premise solution to the software solution provider’s own infrastructure
As an on-premise solution to the infrastructure of the customer
As a cloud solution to the cloud tenant of the software provider (ISV)
As a cloud solution to the cloud tenant of the customer
Let’s suppose we are trying to sell a CRM solution to a bank. If it is an on-premise solution to the bank’s infrastructure, then the bank will have to find available servers or purchase new ones, set up the environment, deploy the solution with the help of the ISV and monitor the infrastructure as the solution operates.
This demands time, energy, money, and human resources from the bank.
Now, compare this to using an online CRM solution, from the cloud tenant of the ISV. The IT of the bank doesn’t have to run/install/deploy the infrastructure, and the bank doesn’t have to invest new resources in hardware.
This leads to a lower adoption cost for the customer in almost all cases if the TCO analysis exercise is done correctly.
#8 Lower TCO for the customer
Buying hardware to host your solution is a CAPEX (capital expenses) move. You pay all the costs upfront for at least the next three to five years.
CAPEX costs include: Server Costs, Storage Costs, Network Costs, Backup and Archive Costs, Disaster Recovery Costs, Datacenter Infrastructure Costs, and Technical IT people Costs
Going to the cloud is an OPEX (operating expenses) move. You pay each month for what you use.
OPEX Costs Include: Leasing of Software and Custom Solutions, Scaling charges based on usage and demand.
In almost all cases, when you do the Total Cost of Ownership exercise for an on-premise environment vs. a cloud environment, the return on investment is better in the cloud world after two to three years.
Below you can have a look at the challenge of the Capex model, especially when there are fluctuations of demand over time.
Examples of extreme fluctuations in demand are:
Black Friday for retail websites
A local news site with a video that goes worldwide viral
National Exams Results for University entry (Ministry of Education website)
Teacher allocation/recruitment results (Ministry of Education Website)
#9 Improve your product by aggregating data on your users’ behavior. Shorten the product development cycle.
In the cloud world, it is pretty easy to get data on how users are interacting with your solution. This speeds up the product development cycle, even up to 33%.
This means less dev time needed and lower development costs.
#10 Attract more easily external investors
If I were to invest in a software solution claiming it can scale to worldwide levels, I would find it very strange if it were an on-premise solution. VCs and tech due diligence consultants show a heavy preference for cloud-based solutions over on-premise ones.
Moreover, going to the cloud will help you move to a recurring revenue business model. And there are companies such as Pipe, which let you sell in advance your recurring revenue to investors, and fuel your growth without VCs.
Recurring revenue on cloud solutions is the new sought-after currency
#11 ISO and other Certifications
A lot of software solution vendors have to comply with ISO certifications, and there are audits to pass every now and then. Being in the cloud vs. on-premise makes it much easier to comply with the audit requirements. Usually, auditors just move on to the next steps when they hear that your data is in the cloud, secured by a trustworthy vendor.
#12 Security
With the on-premise solutions, you have to take care of all the security work to keep your data and applications safe. In the cloud world, this becomes less troublesome.
Below you can find my notes from a very interesting online course on AI by Andrew Ng, who has led AI teams at some of the most prominent AI projects in the world; the course is available here.
So, let’s have a look at what AI is all about and what AI can do for your company.
Let’s start by defining what Machine Learning is, the most common tool of AI today.
Machine Learning
What is Machine Learning? It is just a tool of AI. And what does this tool do? The most common use of this tool (ML) is what is called “Supervised Learning”. This means learning how to go from a point A to a point B, or from an input to an output.
Here are some examples of Supervised Learning applications:
Giving an email to a spam filtering application (input) and deciding whether it is spam or not (output).
Giving an audio file to a speech recognition application (input) and getting the text transcript (output).
Giving an English text to a translation application (input) and getting the Chinese translation (output).
Giving image and radar information to a self-driving car (input) and getting the location of other vehicles (output).
The trick here is obviously how to get the best possible output from my input. To do that, I need to train the system so that it develops its own brain (neural network). And the more data I have to train my application, the better the output will be. If I use 50 images of dogs to train a dog image recognition application to decide if an animal is a dog or a wolf, the system will not behave as well as if I use 10,000 images of dogs (and wolves) to train it.
And this is why everyone says that big data is the new oil. If I know how to use my large datasets correctly, I can develop several “brains” in my company to perform tasks for me.
How do I get the data I need to train my application?
Let’s suppose that you want to build an AI application that decides if an animal is a cat or not. A cat detector. There are three ways to get the data to train your application.
Method 1: Manual Labeling of the Data
This means that you take 1,000 pictures of cats, dogs and other animals and then label each one as a cat or not. You then feed this information to your AI application. Obviously, this method requires a lot of work.
Method 2: Observing behaviors
I might have a video recorder that monitors the animal and observes its behavior. Does the animal jump 3 meters up a tree? Does it sleep on top of your refrigerator? Does it sleep many hours close to the fireplace?
Method 3: Downloading the data from the web or getting them from a partner
For many applications, it is often easy to find large data sets available for free on the web. E.g., stock market prices, real estate prices, images, temperature data, seismic data, etc.
How to make good use of your data for AI applications
A lot of companies think, “I have a lot of collected data, so that means I can throw it at an AI team and they can create something AI-fantastic for me.” That’s not always true. More data is better than less data, but a lot of data doesn’t necessarily mean that you can build a useful AI application.
As an example, I may have a factory and an IoT sensor that gathers data from my engine every 10 minutes. This is obviously a lot of data after some years of operation. But if I want to build a predictive maintenance application that tries to forecast when my engine is at risk of failure and needs to be serviced, I might need data from my engine every minute, or every second.
So, the recommendation here is that once your IT team has some data, you give it to your AI team, and the AI team then comes back with recommendations on how to collect data that will be useful for what they are trying to build.
Moreover, the data that you want to feed to your AI system has to be cleaned. There must be no wrong labels or missing information, because the AI system will otherwise learn the wrong things. It’s like teaching my kid a new language and teaching her to spell the word “apple” wrong, or, by mistake, showing her an apple and calling it an “orange”.
The AI applications work with two types of data: The structured and the unstructured data.
Structured data is whatever can be put in an excel table: Prices of houses, square meters, number of bedrooms.
Unstructured data is images, videos, audio and text. Videos of houses, images of furniture, text descriptions of houses for rent.
Terminology of AI
The most common terms you will hear in AI are “Deep Learning”, “Neural Networks”, “Machine Learning”, and “Data Science”.
Machine Learning vs Data Science
Let’s start with the difference between Machine Learning and Data Science projects. Let’s suppose that you have an Excel file with data about houses: number of bedrooms, square meters, year built, year renovated, and prices.
A machine learning project would create SOFTWARE that helps go from point A to point B. E.g., software into which you input the number of bedrooms, square meters and year built, and which suggests the right price.
A data science project would help you create a POWERPOINT with insights about the data. E.g., you may look into the data and find out that if you have renovated your house in the past 5 years, you win a 15% premium on the price. So, data science projects help you create insights that drive business decisions.
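To make the “software” side of this distinction tangible, here is a tiny, purely illustrative scikit-learn sketch that learns a price from house features; the numbers are made up.

```python
# Toy example of the machine learning "software": features in, suggested price out.
from sklearn.linear_model import LinearRegression

# features: [bedrooms, square_meters, year_built]
X = [[2, 70, 1995], [3, 95, 2005], [4, 130, 2010], [1, 45, 1980]]
y = [150_000, 220_000, 310_000, 95_000]  # prices (made-up)

model = LinearRegression().fit(X, y)
print(model.predict([[3, 100, 2008]]))   # suggested price for a new listing
```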
Neural Network vs Deep Learning
Actually, these are the same thing. It’s just that Deep Learning is the new way to refer to a neural network. Maybe it sounds more impressive for a scientist to say “I deal with Deep Learning” rather than “Neural Networks”.
And what is a neural network or a deep learning system? It is actually a big mathematical equation that tries to help us go from point A to point B. E.g., if I have house data (number of bedrooms, square meters, year renovated) as point A and I would like to predict the price (point B), the system that does the calculations and tries to do that is called a “neural network” or a “deep learning system”.
The term “neural” comes from the “neurons” that we have in our brains. Just as our brains have neurons that connect with each other (well, at least usually…) and try to create an output based on a specific input, the same is the task of an artificial neural network. Of course, this is just a metaphor, and the actual way a human neural network operates has nothing to do with how an artificial neural network operates.
Other buzzwords in AI are “Unsupervised Learning”, “Reinforcement Learning”, “Knowledge Graphs”, etc. These are just other tools to make computers think smarter. But the two most important AI tools are “Machine Learning” and “Neural Networks”.
Summarizing, a Neural Network (or deep learning) is a subset of Machine Learning, which is a subset of AI.
What Makes an AI company?
The previous technology era was the “Internet era”. And now we are in the “AI era”. In the internet era, if I had, let’s say, a shopping mall and threw in a website, this wouldn’t mean I had instantly become an internet company. An internet company in retail (such as Amazon, for example) would actually use all the good stuff the internet was made to provide: A/B testing for products on the website, short iteration times in launching solutions and products on the web, and pushing decision-making down to the engineers and product managers, instead of having the CEO make all the important business decisions.
Similarly, in the “AI era”, throwing a Deep Learning system into my company doesn’t make me an AI company. To get closer to becoming an AI company, I will need to:
Have a strategy on how to acquire data. I might even have to launch products and solutions that don’t make money, so that I collect the strategic data I need for other solutions that I can monetize.
Build a unified data warehouse: If I have several different databases, supervised by many different owners, an engineer could never build a solid AI system. All the data needs to be brought into a single place.
Automate whatever can be done by a computer vs a human.
Create new roles in my company, such as the Machine Learning Engineer.
The multi-year process of becoming a good AI company, for Microsoft, Google, Baidu and other companies, has five steps:
Step 1: Create a few small AI projects. Have your teams create them, so that everyone gets an idea what the company could do with AI. This could be done by an internal team of engineers or you can outsource it to an external team.
Step 2: Build an in house AI team.
Step 3: Provide AI training to a broader set of employees, including engineers, managers, business decision makers.
Step 4: Develop AI strategy
Step 5: Develop internal and external AI communications
What Machine Learning Can and Cannot Do
The rule of thumb is that whatever a human can do with about one second of thought, we can automate with supervised learning. For example, if a human can recognize a scratch on a phone in a second, we can automate it. If a driver can find the position of other cars in a second, we can automate it. If I can recognize speech in a couple of seconds, we can automate it with AI.
AI cannot successfully do things that take a human many hours to complete. As an example, a human would need many hours of thinking to write a 50-page market research report. It would be impossible for an AI system to do this.
In another example, if I am building a self-driving car, AI can help me build a system that takes as input the data (images, radar information) about a car in front of me and outputs the answer to “where is the car”. What AI could not do is understand the intention behind a gesture. If I see a policeman raising his hand, a human being could easily understand that this is a “stop” sign, but AI could not figure out the intention. The same goes if I see a bicyclist raising her hand to the left, signaling that she wants to turn left: an AI system could not understand the intention behind this. The problem for AI is that there is no uniform interpretation of human gestures. Gestures can mean different things in different cultures, and there are too many variations of gestures for an AI system to work effectively.
Obviously, if I narrow this system down to the level where a Kinect game understands that raising my hand means “up”, this is something an AI system can handle.
Building AI Projects
Workflow of a Machine Learning Project
An ML project has three key steps: collect the data, train the model, and deploy the model. Let’s look at the example of building a speech recognition model for Amazon’s Echo, to recognize the “Hey Alexa” initiation command.
The first step would be to collect the data. That means I would have to ask 1,000 people to say “Hello”, or “Hi”, and “Alexa”, and get the audio clips.
The second step would be to train the model. I want my system to understand whether someone indeed said “Hello Alexa”, so as to activate the speaker. This means that I will need to build a supervised learning system that takes as input the audio files I collected from people saying “Hello”, “Hi”, “Alexa”, and outputs whether the user indeed said “Hello Alexa”.
The third step would be to deploy the system. In our case, this would mean deploying the software in a speaker and shipping it to the users. At this point, I might run into problems with my application. For example, I might have used American-accent audio files to train my model and then shipped it to Greece, where people speak English with a different accent. In that case, I would need to collect new data (from Greeks speaking English), update my model and relaunch it.
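To make the three steps more concrete, here is a minimal Python sketch using scikit-learn (one of the tools mentioned later in this post). It assumes the audio clips have already been converted into fixed-length feature vectors; the arrays and labels below are random placeholders, not real wake-word data.
```python
# Minimal sketch of the three steps for a "Hey Alexa" wake-word model.
# Assumes each audio clip has already been turned into a fixed-length
# feature vector (e.g. MFCCs); the arrays below are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Step 1 - Collect the data: one feature vector per recorded clip,
# with label 1 = "Hey Alexa", 0 = anything else ("Hello", "Hi", ...).
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 40))   # 1000 clips, 40 features each
labels = rng.integers(0, 2, size=1000)   # placeholder labels

# Step 2 - Train the model: a supervised classifier mapping audio features
# to "wake word spoken or not".
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Step 3 - Deploy: ship the trained model to the device and keep monitoring it.
# If accuracy drops for a new accent, collect new clips, retrain and relaunch.
```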
Specific AI Examples per Industry or Job Function
Here are some examples of how ML could help several job functions or industries:
Manufacturing: ML can help with visual recognition of defective products in the production line.
Recruiting: ML can help with deciding which CVs to look at (see the sketch after this list).
Marketing: ML can help with A/B testing of websites.
Agriculture: ML can help with precision agriculture, e.g. recognizing a specific spot in the field that needs to be sprayed. Moreover, ML can help with crop analytics.
Sales: ML can help with prioritizing leads to contact in an organization. Or with sales forecasting.
Travel: ML can help with supporting travel chatbots to provide answers about travel destinations.
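For the recruiting example above, here is a small, hypothetical sketch of "deciding which CVs to look at": ranking resumes by text similarity to a job description with scikit-learn. The job description and resumes are made up, and a real screening system would need much more than keyword similarity.
```python
# Hypothetical sketch: rank resumes by TF-IDF similarity to a job description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Senior Python developer with experience in REST APIs and SQL"
resumes = [
    "Python developer, built REST APIs, PostgreSQL, 5 years experience",
    "Marketing manager, SEO campaigns, content strategy",
    "Backend engineer, Java and SQL, some Python scripting",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + resumes)

# Similarity of each resume to the job description, highest first.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for idx in scores.argsort()[::-1]:
    print(f"resume #{idx + 1}: score {scores[idx]:.2f}")
```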
How to Choose an AI project for your Business
AI can do a lot of things. But what is relevant to your own business? Getting ideas for AI projects doesn’t happen by luck, so you need to work for it. You will need to brainstorm, and to do it effectively you will need two teams: an AI team and a domain experts team. For example, if you are brainstorming ideas for AI in your marketing, you will need the AI team, which knows what is possible to do with AI, and the marketing people (product managers, online sales managers, etc.) in the same room.
The suggested brainstorming framework for an AI project has the following three axes:
Think about automating tasks, not automating jobs. For example, if you are brainstorming on AI projects for your call center, think about the several tasks that take place (picking up the phone, searching the CRM, updating the CRM, opening a support ticket, sending emails, issuing refunds, call routing, email routing, etc.) and which of these can be automated. In this case, call routing or email routing would be the most appropriate for an AI project.
Ask the question: “What are the main drivers of business value for my company?”. If I can find an AI project that supports one of these drivers (e.g. customer satisfaction, technical support, speed of delivery), the value will be tremendous.
Ask the question: “What are the main pain points of my company?”.
It is important to note here that you don’t necessarily need big data to start an AI project. Even with small datasets you could start seeing meaningful results. The amount of data you need for an AI project is problem-specific, and you will need to ask your AI expert about how much data you need to train the model.
How to decide if it makes sense to start a specific AI project
An AI project might take a few days or a few months of work to complete. Before jumping on it, you will need to do your due diligence on a technical and business level.
-Technical Diligence: Talk to your AI experts to understand whether it is possible on a technical level to achieve the desired outcome. As an example, the Word Error Rate (WER) for humans in speech recognition is around 4%, meaning that out of 100 words a person listens to, 4 of them might be misinterpreted. AI systems have reached the same error rate. But if you expect to reach a 0.01% Word Error Rate, you would need to break the world record many times over, and on a technical level that would be unachievable for your company. The second set of questions is "How much data do I need?" and "Do I have a way to get this amount of data?". The third question to answer is "How much time and how many engineers do I need to build this AI solution?".
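For reference, Word Error Rate is the word-level edit distance between what was actually said and what the system transcribed, divided by the number of words actually said. A minimal sketch, with a made-up example sentence:
```python
# Minimal Word Error Rate (WER): word-level edit distance / reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# 1 wrong word out of 4 gives 25% WER; 4 wrong out of 100 would give the
# 4% human benchmark mentioned above.
print(word_error_rate("the quick brown fox", "the quick brown box"))  # 0.25
```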
-Business Diligence: You will have to quantify, with real KPIs, what the business ROI of your AI investment would be. So, if I indeed manage to succeed with my AI marketing project and can drive 10% more website visitors to the shopping cart, how much additional money would that mean for my business?
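As a rough illustration of this kind of business diligence, here is a back-of-the-envelope calculation; every number below is a made-up assumption you would replace with your own figures.
```python
# Back-of-the-envelope ROI sketch with hypothetical numbers: what is a 10%
# lift in visitors reaching the shopping cart worth per year?
monthly_visitors = 100_000          # hypothetical traffic
cart_rate = 0.05                    # 5% of visitors reach the cart today
purchase_rate = 0.40                # 40% of carts convert to a purchase
average_order_value = 60.0          # hypothetical order value in EUR

baseline_revenue = monthly_visitors * cart_rate * purchase_rate * average_order_value
uplift_revenue = baseline_revenue * 0.10   # the 10% improvement from the AI project

print(f"extra revenue per month: {uplift_revenue:,.0f} EUR")
print(f"extra revenue per year:  {uplift_revenue * 12:,.0f} EUR")
# Compare the yearly figure against the cost of building and running the project.
```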
-Ethical Diligence: AI can do a lot of things, but would I want to use AI to make gamblers with an addiction problem visit my betting site more often?
Should I do my AI Project in-house or Outsource it?
Both approaches can work. With outsourced ML projects you get faster access to talent, and if you run a couple of AI projects successfully, you can then start building your own internal AI team. Data science projects, on the other hand, are usually built in-house, because of the very close business input and insight they require.
What Data will an AI team ask from you as a business owner?
If you start working with an AI team, you will be asked for two types of data: a specific dataset to be used as a "Training Dataset", and another dataset to be used as a "Test Dataset".
Should I Expect 100% Accuracy in my AI solution?
No. It’s a common mistake to expect AI to be 100% accurate, e.g. in understanding language or images, automating tasks, finding defective products, routing emails and calls, and other applications. This is not the case, as there are limitations to AI due to mislabeled data, ambiguity in the data, wrong data input and more. Instead, you should focus on an acceptable level of accuracy that still provides huge business value for you. Yes, you would love to have 100% accuracy, but maybe 95% will hit all your business targets as well.
AI Technical Tools
Some of the most common open-source frameworks and tools used for ML are TensorFlow, PyTorch, Keras, MXNet, CNTK, Caffe, PaddlePaddle, Scikit-learn, R and Weka. A lot of research publications are available at https://arxiv.org, and many teams publish their code on GitHub (github.com).
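As a taste of what these tools look like in practice, here is a tiny scikit-learn example that trains a classifier on a bundled demo dataset and reports its accuracy on a held-out test set; it is purely illustrative, not a recipe for your own project.
```python
# Tiny example with one of the listed open-source tools (scikit-learn):
# train a classifier on a bundled dataset and report test accuracy.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```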
Moreover, an AI project can be deployed on-premise, on the cloud, or on the edge. On-premise means that you install it on your own servers at your company; cloud means that you use the infrastructure of a cloud provider (e.g. Microsoft Azure); and edge deployment means that you put a processor where the data is (e.g. for a smart speaker at home, you put a processor inside the speaker instead of sending the audio to the cloud, getting the results and sending them back to the speaker). Generally, the world is moving to cloud solutions for AI, and to edge deployments where the cloud doesn’t make sense (e.g. self-driving cars, smart speakers, etc.).
Roles in an AI Team
Usually, in a large AI team you need Software Engineers, Machine Learning Engineers, Machine Learning Researchers, Data Scientists, Data Engineers and an AI Product Manager. Of course, you can always start with just a Machine Learning Engineer.