Thursday, February 27, 2025

Key phrase extraction with Azure Cognitive Services

Don't have Azure? Grab a free subscription.

I recently took a look at Text Analytics, which was introduced with Cognitive Services and is now available inside the Azure portal. If you open the Azure portal and look under AI and Machine Learning, you'll see the following:

Let's give Text Analytics a spin. Open the blade and fill out the following info. Be sure to select the Free tier (F0) as shown below:

Select Keys and copy the value of Key 1.

We'll use Postman to test the API. Go ahead and download it if you haven't already. Once it's installed, you'll use one of the following endpoints, depending on which feature you want:

https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment
https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases
https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/languages

We'll use the keyPhrases endpoint for learning purposes.

What are key phrases? The service automatically extracts the key phrases from your text so you can quickly identify its main points.

Copy the https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases URL into Postman and set the following three header properties:

  • Ocp-Apim-Subscription-Key = Set it to the Key 1 value you copied earlier.
  • Content-Type = Set it to application/json.
  • Accept = Set it to application/json.

Your screen should look like the following:

Now switch over to Body, then Raw and post the following JSON (from some of my recent tweets):

{
  "documents": [
    {
      "language": "en",
      "id": "1",
      "text": "Top 10 .NET Development Tweets that Broke the Web in 2017 - http://mcrump.me/2ot58Co  #dotnet"
    },
    {
      "language": "en",
      "id": "2",
      "text": "Setting up a managed container cluster with AKS and Kubernetes in the #Azure Cloud running .NET Core in minutes - http://mcrump.me/2op9mek  #dotnet"
    }
  ]
}

Now press Send and it will return key phrases from my tweets.

{
  "documents": [
    {
      "keyPhrases": ["Web", ".NET Development Tweets", "dotnet"],
      "id": "1"
    },
    {
      "keyPhrases": [
        "Kubernetes",
        "Azure Cloud",
        "minutes -",
        "AKS",
        ".NET Core"
      ],
      "id": "2"
    }
  ],
  "errors": []
}
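
If you'd prefer to call the endpoint from code rather than Postman, here's a minimal sketch in Python using the requests library. It sends the same three headers and the same kind of JSON body shown above; the subscription key placeholder is an assumption you'll need to replace with your own Key 1 value.

import requests

# Replace this placeholder with the Key 1 value you copied from the portal.
SUBSCRIPTION_KEY = "<your-key-1>"
ENDPOINT = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases"

headers = {
    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    "Content-Type": "application/json",
    "Accept": "application/json",
}

body = {
    "documents": [
        {
            "language": "en",
            "id": "1",
            "text": "Top 10 .NET Development Tweets that Broke the Web in 2017 - http://mcrump.me/2ot58Co  #dotnet"
        }
    ]
}

# Post the documents and print the key phrases returned for each one.
response = requests.post(ENDPOINT, headers=headers, json=body)
response.raise_for_status()
for doc in response.json()["documents"]:
    print(doc["id"], doc["keyPhrases"])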

Want more Cognitive Services? Check out our quickstarts and tutorials!


We'll be posting articles every day in April, so stay tuned or jump ahead and check out more tips and tricks now.

Friday, January 24, 2025

Create a summary of your document with Copilot in Word

 


You can ask Copilot to create a summary—or when you open a Word document, you can get a summary at the top of the document.

Ask Copilot to create a summary or find references


Note: This feature is available to customers with either a Microsoft 365 Copilot (work) license or a Copilot Pro (home) license.

View and use the automatic summary

If you just received a lengthy document to review, you can save time by letting Copilot in Word help you distill it down to key points.

Note: This Copilot summary is available to customers with a Microsoft 365 Copilot (work) license. We're rolling the automatic summary out gradually, so if you don't see it yet, keep watching for it in an update (see Delivering continuous innovation in Windows 11 for more information).

When you open a document, you can view a summary—which you can scan to see what topics are in the document. If you're the author of a long document, you can use the summary to quickly see if the content is organized the way you want.

The summary appears in a collapsed or partially open section at the top of the page. If you saved the document on OneDrive or SharePoint, you'll see that it's already generated; otherwise, it generates when you open the document. Select View more to see the full summary, and if you want to customize it or ask follow-up questions about the document, select Open in chat at the bottom of the summary and enter a prompt.

Shows a collapsed Copilot summary at the top of a Word document. A "View more" button allows the summary to be expanded.

Tip: If you aren't finding the summaries useful, you can choose how you see them for all documents. For documents saved on OneDrive or SharePoint, the summaries are cached so that your computer performance remains as fast as possible.

Ask Copilot for a summary if you don't see one automatically

Automatic summaries appear when your document is saved on OneDrive or SharePoint and you have a Microsoft 365 Copilot (work) license. For documents saved elsewhere, you'll see an option to generate a summary instead.

If you still aren't seeing a summary, you can ask Copilot to show you one.

  1. Select Copilot from the ribbon to open the Copilot pane.

  2. In the Copilot compose box, enter "summarize this document" or click on the suggestion option.

Want to learn more about a summarized idea? In Copilot's summary, select References to view citations that Copilot pulled information from inside the document.

Create a summary when sharing a document

Note: This feature is currently available only to customers with a Microsoft 365 Copilot (work) license. A generated summary shows in the share message only if the document has been saved in OneDrive or SharePoint; otherwise, it generates when the document is opened.

Copilot can generate summaries when you share an unencrypted document with collaborators. Encrypted documents aren't supported at this time.

  1. In an existing Word document, select Share, and then select Share again from the list.

  2. Select the Copilot icon inside the Add a message box. Copilot generates a summary of the document for easier sharing.

     Shows the message box you see in Word when you share a document. The Copilot icon is in the message box.

  3. (Optional) Edit the summary as needed.

     Shows the box you see in Word when you share a document. A summary is shown in the box.

After you've shared the Word file, the email notification to your collaborators includes the summary generated by Copilot.

Review the results

Review the summary Copilot generated. How does it flow? Is it ready to share? Or does it need a little more work? Often the first response isn't perfect. AI works best with a little back-and-forth conversation. You can get better results by providing more context and details about what you want.

Provide more context and details

Include some context and a few details in your prompts to get better results with Copilot. Who's the summary for? Why do you need it? How do you plan to use it? Try using prompts like these:

  • What should business decision makers know about <subject in your document>? Why is it important to understand these things?

  • I need to share the main points of this document with my teammates. Write a few paragraphs that include why these points are important to our company. Are there any calls to action? What should we do next?

With each prompt, Copilot scans the document again, and then generates a response. You can continue submitting prompts until you're pleased with the results.

Try suggested prompts

Copilot offers suggested prompts to try, or you can always type your own prompts for Copilot. With each response, you'll see one or more suggested prompts. Give one a try and see what happens.

Choose whether summaries show automatically

You can choose whether summaries show automatically or always appear in a collapsed state.

  • In the automatic summary, select Settings (...) and then select the box for Collapse Copilot summary automatically.

  • When you're using Word for the web, select the arrow next to the Copilot icon in the ribbon, select Copilot Settings, and select the box for Collapse Copilot summary automatically.

    Shows the Copilot option on the Word ribbon, with the Settings option selected in the dropdown list.

If you're using a Mac or a device that's running Win32, the steps are a little different:

  • On Win32, go to File > Options > Copilot, and select the box for Collapse Copilot summary automatically.

  • On Mac, go to Word > Preferences > Copilot, and select the box for Collapse Copilot summary automatically.


Friday, April 26, 2024

Technology’s generational moment with generative AI: A CIO and CTO guide

 Ref: A CIO and CTO technology guide to generative AI | McKinsey

1. Determine the company’s posture for the adoption of generative AI


As use of generative AI becomes increasingly widespread, we have seen CIOs and CTOs respond by blocking employee access to publicly available applications to limit risk. In doing so, these companies risk missing out on opportunities for innovation, with some employees even perceiving these moves as limiting their ability to build important new skills.

Instead, CIOs and CTOs should work with risk leaders to balance the real need for risk mitigation with the importance of building generative AI skills in the business. This requires establishing the company’s posture regarding generative AI by building consensus around the levels of risk with which the business is comfortable and how generative AI fits into the business’s overall strategy. This step allows the business to quickly determine company-wide policies and guidelines.

Once policies are clearly defined, leaders should communicate them to the business, with the CIO and CTO providing the organization with appropriate access and user-friendly guidelines. Some companies have rolled out firmwide communications about generative AI, provided broad access to generative AI for specific user groups, created pop-ups that warn users any time they input internal data into a model, and built a guidelines page that appears each time users access a publicly available generative AI service.

2. Identify use cases that build value through improved productivity, growth, and new business models


CIOs and CTOs should be the antidote to the “death by use case” frenzy that we already see in many companies. They can be most helpful by working with the CEO, CFO, and other business leaders to think through how generative AI challenges existing business models, opens doors to new ones, and creates new sources of value. With a deep understanding of the technical possibilities, the CIO and CTO should identify the most valuable opportunities and issues across the company that can benefit from generative AI—and those that can’t. In some cases, generative AI is not the best option.

McKinsey research, for example, shows generative AI can lift productivity for certain marketing use cases (for example, by analyzing unstructured and abstract data for customer preference) by roughly 10 percent and customer support (for example, through intelligent bots) by up to 40 percent.2 The CIO and CTO can be particularly helpful in developing a perspective on how best to cluster use cases either by domain (such as customer journey or business process) or use case type (such as creative content creation or virtual agents) so that generative AI will have the most value. Identifying opportunities won’t be the most strategic task—there are many generative AI use cases out there—but, given initial limitations of talent and capabilities, the CIO and CTO will need to provide feasibility and resource estimates to help the business sequence generative AI priorities.

Providing this level of counsel requires tech leaders to work with the business to develop a FinAI capability to estimate the true costs and returns on generative AI initiatives. Cost calculations can be particularly complex because the unit economics must account for multiple model and vendor costs, model interactions (where a query might require input from multiple models, each with its own fee), ongoing usage fees, and human oversight costs.
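
To make that arithmetic concrete, here's a rough, illustrative unit-economics sketch in Python. Every figure in it (token prices, fan-out across models, review rates) is an assumption chosen only to show the mechanics of the calculation, not a real vendor quote or McKinsey estimate.

# Illustrative unit-cost model for a generative AI use case.
# All numbers below are made-up assumptions for demonstration only.
queries_per_month = 100_000
avg_input_tokens = 1_500          # prompt plus retrieved context per query
avg_output_tokens = 400

price_per_1k_input = 0.003        # assumed vendor price, USD per 1,000 input tokens
price_per_1k_output = 0.006       # assumed vendor price, USD per 1,000 output tokens

# A query may fan out to more than one model (for example, a router plus a generator),
# each charging its own fee.
models_per_query = 2

# Fraction of responses a human reviews, and the loaded cost of each review.
human_review_rate = 0.05
cost_per_human_review = 1.50      # USD

model_cost_per_query = models_per_query * (
    avg_input_tokens / 1_000 * price_per_1k_input
    + avg_output_tokens / 1_000 * price_per_1k_output
)
oversight_cost_per_query = human_review_rate * cost_per_human_review
total_per_query = model_cost_per_query + oversight_cost_per_query

print(f"Cost per query: ${total_per_query:.4f}")
print(f"Cost per month: ${total_per_query * queries_per_month:,.2f}")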

3. Reimagine the technology function


Generative AI has the potential to completely remake how the tech function works. CIOs and CTOs need to conduct a comprehensive review of the potential impact of generative AI on all areas of tech, but it's also important to take action quickly to build experience and expertise. There are three areas where they can focus their initial energies:

  • Software development: McKinsey research shows generative AI coding support can help software engineers develop code 35 to 45 percent faster, refactor code 20 to 30 percent faster, and perform code documentation 45 to 50 percent faster.3 Generative AI can also automate the testing process and simulate edge cases, allowing teams to develop more-resilient software prior to release, and accelerate the onboarding of new developers (for example, by asking generative AI questions about a code base). Capturing these benefits will require extensive training (see more in action 8) and automation of integration and deployment pipelines through DevSecOps practices to manage the surge in code volume.
  • Technical debt: Technical debt can account for 20 to 40 percent of technology budgets and significantly slow the pace of development.4 CIOs and CTOs should review their tech-debt balance sheets to determine how generative AI capabilities such as code refactoring, code translation, and automated test-case generation can accelerate the reduction of technical debt.
  • IT operations (ITOps): CIOs and CTOs will need to review their ITOps productivity efforts to determine how generative AI can accelerate processes. Generative AI’s capabilities are particularly helpful in automating such tasks as password resets, status requests, or basic diagnostics through self-serve agents; accelerating triage and resolution through improved routing; surfacing useful context, such as topic or priority, and generating suggested responses; improving observability through analysis of vast streams of logs to identify events that truly require attention; and developing documentation, such as standard operating procedures, incident postmortems, or performance reports.

4. Take advantage of existing services or adapt open-source generative AI models


A variation of the classic “rent, buy, or build” decision exists when it comes to strategies for developing generative AI capabilities. The basic rule holds true: a company should invest in a generative AI capability where it can create a proprietary advantage for the business and access existing services for those that are more like commodities.

The CIO and CTO can think through the implications of these options as three archetypes:

  • Taker—uses publicly available models through a chat interface or an API, with little or no customization. Good examples include off-the-shelf solutions to generate code (such as GitHub Copilot) or to assist designers with image generation and editing (such as Adobe Firefly). This is the simplest archetype in terms of both engineering and infrastructure needs and is generally the fastest to get up and running. These models are essentially commodities that rely on feeding data in the form of prompts to the public model.
  • Shaper—integrates models with internal data and systems to generate more customized results. One example is a model that supports sales deals by connecting generative AI tools to customer relationship management (CRM) and financial systems to incorporate customers’ prior sales and engagement history. Another is fine-tuning the model with internal company documents and chat history to act as an assistant to a customer support agent. For companies that are looking to scale generative AI capabilities, develop more proprietary capabilities, or meet higher security or compliance needs, the Shaper archetype is appropriate.

    There are two common approaches for integrating data with generative AI models in this archetype. One is to “bring the model to the data,” where the model is hosted on the organization’s infrastructure, either on-premises or in the cloud environment. Cohere, for example, deploys foundation models on clients’ cloud infrastructure, reducing the need for data transfers. The other approach is to “bring data to the model,” where an organization can aggregate its data and deploy a copy of the large model on cloud infrastructure. Both approaches achieve the goal of providing access to the foundation models, and choosing between them will come down to the organization’s workload footprint.
  • Maker—builds a foundation model to address a discrete business case. Building a foundation model is expensive and complex, requiring huge volumes of data, deep expertise, and massive compute power. This option requires a substantial one-off investment—tens or even hundreds of millions of dollars—to build the model and train it. The cost depends on various factors, such as training infrastructure, model architecture choice, number of model parameters, data size, and expert resources.

Each archetype has its own costs that tech leaders will need to consider (Exhibit 1). While new developments, such as efficient model training approaches and lower graphics processing unit (GPU) compute costs over time, are driving costs down, the inherent complexity of the Maker archetype means that few organizations will adopt it in the short term. Instead, most will turn to some combination of Taker, to quickly access a commodity service, and Shaper, to build a proprietary capability on top of foundation models.

Thursday, March 28, 2024

Swiss Federal Railways CIO relies more on AI than concrete

News Analysis


Jochen Decker is fully committed to AI for complex optimization projects to yield measurable cost benefits.

Jochen Decker, CIO, SBB

Railway construction couldn’t be more laborious than in Switzerland, as the country consists almost exclusively of mountains, most of which are now spanned with bridges and riddled with holes, like the famous local cheese.

The rail network is also the densest in Europe to the point where it can no longer be expanded because all the necessary areas are already fully utilized. “We can only optimize,” said Jochen Decker at the Hamburg IT Strategy Days in February. And this is urgently needed because Swiss Federal Railways (SBB) expects 30 to 40% more passengers in 2034 than today, putting that much additional strain on an unchanged route network. So Decker came to Hamburg to report on how it can be achieved, and show the central role that artificial intelligence will play.

Opportunities not realized before

SBB, unlike Deutsche Bahn, is an integrated group that brings together passenger and freight transport, infrastructure, and real estate under a single roof, which facilitates the planning and implementation of investments and innovations. The IT budget amounts to €850 million per year, which is about 7% of sales.

A few years ago, SBB committed to three optimization programs that will cost around €1 billion by 2027. In traffic management, the aim is to make better use of routes, in particular by reducing the distances between trains. Production planning aims to get more kilometers out of people and materials, ensuring that trains stand still as little as possible and that train drivers spend as much of their working time as possible driving rather than on other tasks. The third part of the program, asset management, is intended to reduce material wear and tear and make better use of the workshops.
