Using Fiori MCP Server in VSCode with Cline or GitHub CoPilot

Introduction

Devtoberfest sparked many ideas. One of them is speeding up development with the brand-new Fiori MCP Server and coding assistants.

The idea is to plan and execute a development task via a chat window in VSCode, which then turns into the required code changes in your workspace.

Model Context Protocol (MCP)

Large Language Models (LLMs) answer your questions using only the static data encoded in their own weights.

To provide more complex answers and solutions, agents can interact with external servers via an MCP client. MCP servers provide additional functionality and access to real-world data, which reduces LLM errors and hallucinations.

For a given topic like Fiori, an MCP server provides dedicated functionality to execute and orchestrate complex tasks, and supports agentic workflows.

The LLM still performs the actions, decisions and reasoning, but the agent driving the LLM can query servers via its MCP client for better orchestration of complex, topic-specific tasks. MCP can also help create the implementation plan.

Software Environment

  • VSCode
  • Fiori MCP Server
  • Agent:
    • Option 1: Cline
      • LLM server running a model
        • Option A: remotely, using a URL and API key to access an LLM server on the internet. Online LLM providers are available; you get free trial access to some models from Cline after registering an account.
        • Option B: locally
    • Option 2: GitHub Copilot (alternative to Cline): if you already subscribe to or pay for GitHub Copilot, simply jump to the Copilot section

Before you start

Cline is an agent like GitHub Copilot. If you would rather use GitHub Copilot – because you already have a subscription – simply jump down to Using Fiori MCP Server with GitHub Copilot; the configurations below will not be required at all.

The authors of Fiori MCP Server use Cline, however, so this is what we set up and test with a local LLM server. The result is a completely free local setup. If you do not have enough RAM/VRAM (~34 GB), you will need to use an online LLM provider (Option A).

Fiori MCP Server

Install the node module

npm install -g @sap-ux/fiori-mcp-server

LLM Server

Remote Setup – Option A

If you do not have a computer with enough resources, or do not want to run a local LLM server, you can still subscribe to one of the online providers. Their free plans give limited API access to models.

If you create a Cline account, you get free access to some models. After installing the Cline extension you can configure it.

If you decide to use a remote server, skip Option B and start setting up Cline.

Local Setup – Option B

If you want an always-free approach, you need to go with a local setup, provided you own good computing power. Note that local setups have some limitations, and you need ~34 GB of dedicated RAM/VRAM.

Depending on your taste and hardware, several options exist for running an LLM locally as a server, ranging from command-line utilities to applications with a GUI:

  • Ollama (GPU/CPU)
  • LM Studio (Fancy GUI based server, CPU/GPU)
  • Lemonade (a server managing CPU/GPU/NPU/Hybrid runners)
    • FastFlowLM aka FLM (NPU): for AMD Ryzen AI processors. Lemonade also exposes some of its supported models.

In this case we will use Ollama, so please install it.

Reason: only on Ollama does an AI expert provide a Cline-optimized local version of the Qwen3 Coder model. It has the configuration required to run stably.

Pull an image with 128k-token-window support. The variants differ in parameter count and quantization. I am using the 30b version, which I consider stable.

Note: Ollama loads models into memory on access and unloads them after an idle period, so it makes sense to configure the timeouts.

Here are some commands to manage your LLM. After ollama run, a prompt automatically opens in the terminal; you can quit that prompt with /bye.

ollama pull mychen76/qwen3_cline_roocode:30b-128k  
ollama run mychen76/qwen3_cline_roocode:30b-128k  
ollama stop mychen76/qwen3_cline_roocode:30b-128k
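The load/unload behaviour mentioned above can be tuned with Ollama's documented OLLAMA_KEEP_ALIVE environment variable; the value below is only an example, not a recommendation:

```shell
# Keep models in memory for 1 hour after the last request instead of the
# short default; a value of -1 would keep them loaded indefinitely.
export OLLAMA_KEEP_ALIVE=1h
echo "keep-alive set to $OLLAMA_KEEP_ALIVE"   # prints "keep-alive set to 1h"
```

Set this in the environment where ollama serve runs; you can check which models are currently loaded with ollama ps.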

Three key concepts of Large Language Models (LLMs):

Parameter count: These are the millions or billions of tiny settings (like switches or weights) inside the model that control how it understands and generates language. More parameters usually mean the model can learn more complex patterns, but they need more computing power.

Token size: Tokens are chunks of text (words or parts of words) that the model processes. Token size here refers to how many tokens the model can handle at once in its context window (e.g., 4,000 tokens). This limits how much text the model can understand or generate in one go.

Quantization: This is a method to make the model smaller and faster by reducing the precision of those parameters (the tiny settings) from high-precision numbers to smaller, less precise ones. It slightly reduces accuracy but saves memory and speeds up the model on less powerful hardware.

In simple terms: The model has many tiny knobs to tweak language (parameters), it reads and writes in small text chunks called tokens, and quantization squishes its memory so it runs quicker and lighter.
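As a rough back-of-the-envelope sketch tying these concepts to the ~34 GB figure mentioned earlier (my own estimate, not an official sizing):

```shell
# Rough weight-memory estimate for a quantized model (illustrative only):
# parameters (billions) * bits per parameter / 8 bits-per-byte ~ gigabytes
PARAMS_B=30
BITS=4
echo "~$((PARAMS_B * BITS / 8)) GB for the weights alone"   # prints "~15 GB for the weights alone"
```

The KV cache for a 128k-token context window plus runtime overhead comes on top of the weights, which is why considerably more dedicated memory than 15 GB is needed in practice.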

Cline

Cline is an agent like GitHub CoPilot.

Install it from VSCode extensions marketplace.


LLM Configuration

Remote

Select Cline as API Provider and choose a free model.

Local

  • API Provider: Ollama
  • Model: mychen76/qwen3_cline_roocode:30b
  • Model Context Window: 131072

MCP Configuration

Add Fiori MCP Server to Cline. Click the little bars icon.


Paste the configuration below; after that the MCP server will be installed and the LED indicator will turn green.

{
  "mcpServers": {
    "fiori-mcp-server": {
      "autoApprove": [
        "search_docs",
        "list_functionality",
        "list_fiori_apps",
        "get_functionality_details"
      ],
      "disabled": false,
      "timeout": 600,
      "type": "stdio",
      "command": "fiori-mcp"
    }
  }
}

Rule Configuration

The last step before launching the rockets is to add some rules to Cline, which the model needs to take into account. You'll find the starter rules at https://www.npmjs.com/package/@sap-ux/fiori-mcp-server; look for AGENTS.MD.

Open the rules panel


Enter fiori.md and press the button, then paste the rules from the npmjs site.

You can extend these rules over time based on your experience.

You can enable or disable feeding the rules with the slider.


Playing with Cline and Fiori MCP Server

Switch to Plan mode, and go through the Auto-approve settings for your safety.


Finally, you can start discovering Fiori MCP Server. For the initial capabilities, watch this Devtoberfest video to get some hints on how to utilize it. After testing many local models, I have – with the above local setup – successfully created new Fiori apps using CAP backends and changed existing XML views with chat instructions, for example.

Cline Alternative – Using Fiori MCP Server with GitHub CoPilot

If you already have a GitHub Copilot subscription and want to add Fiori MCP Server to Copilot, simply install it using the built-in command.


Follow the step-by-step wizard and enter the parameters.

If the configuration is not as desired, you can change it afterwards from the MCP server list.


Afterwards, open the chat in VSCode and select the Fiori MCP Server to be used by Copilot.


I have not tested Copilot with Fiori MCP Server myself, but using other MCP servers I found Copilot less explanatory than Cline.

Some Extra

Why not explore these as well?

UI5 MCP Server

https://github.com/UI5/mcp-server

Configuration is similar to Fiori MCP Server. The official documentation shows executing it via npx as an example, but I definitely do not want to go through the download process every time UI5 MCP Server is accessed in each new project, so I install it globally. (npx does make it very easy for the vendor to deliver bugfixes, though.) To keep the globally installed version up to date, simply install it again.
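For reference, the npx-based variant would look roughly like this in Cline (a sketch following the common MCP npx pattern; the -y flag just skips npx's install confirmation):

```json
{
  "mcpServers": {
    "ui5mcp": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@ui5/mcp-server"]
    }
  }
}
```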

MCP Global Installation

npm install -g @ui5/mcp-server

Cline MCP Server configuration

{
  "mcpServers": {
    "ui5mcp": {
      "autoApprove": [
        "get_api_reference",
        "get_guidelines",
        "get_project_info",
        "get_version_info"
      ],
      "disabled": false,
      "timeout": 600,
      "type": "stdio",
      "command": "ui5mcp",
      "env": {
        "UI5_MCP_SERVER_ALLOWED_ODATA_DOMAINS": "",
        "UI5_LOG_LVL": "verbose",
        "UI5_DATA_DIR": "d:\\MCPServerData\\ui5\\"
      }
    }
  }
}

UI5_MCP_SERVER_ALLOWED_ODATA_DOMAINS is a whitelist of service hosts. Leave empty to allow everything.

UI5_DATA_DIR is not mandatory.

Official Documentation:

  • Default Value: The .ui5 directory in the user’s home directory
  • Description: Directory where the MCP server stores its data, such as cached API references

CAP MCP Server

https://github.com/cap-js/mcp-server

Global installation and configuration can be done like Fiori MCP (see above).

npm i -g @cap-js/mcp-server

Cline MCP Server Config

{
  "mcpServers": {    
    "cds-mcp": {
      "autoApprove": [
        "get_api_reference",
        "get_guidelines",
        "get_project_info",
        "get_version_info",
        "search_model",
        "search_docs"
      ],
      "disabled": false,
      "timeout": 600,
      "type": "stdio",
      "command": "cds-mcp"
    }
  }
}

Cline Rules (cap.md)

# Agent Instructions

## Rules for creation or modification of CAP services

- You MUST search for CDS definitions, like entities, fields and services (which include HTTP endpoints) with cds-mcp, only if it fails you MAY read \*.cds files in the project.
- You MUST search for CAP docs with cds-mcp EVERY TIME you create, modify CDS models or when using APIs or the `cds` CLI from CAP. Do NOT propose, suggest or make any changes without first checking it.

Note: CAP MCP Server – unlike Fiori – showed the well-known symptoms/errors of quantized local LLMs used in agentic client workflows. Nonsense loops made CAP MCP combined with the local LLM used here unusable. As a fallback, you can still use CAP MCP properly with a full, remote LLM.

The free alpha model provided by Cline is stable with CAP MCP.


Summary

While running Cline with a local LLM is possible, due to quantization the capabilities are limited compared to a full remote model. You need to be more explicit, and you can accomplish the same task through multiple iterative instructions. CAP MCP, however, dies in nonsense loops within Cline.

If you have enough time and energy, using a Modelfile with chat templates, system prompts and custom parameters could make it possible to rebuild one of the Qwen models so that Cline does not die with the common local-LLM symptoms when using the model.
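Such a rebuild could be sketched with an Ollama Modelfile like the one below. This is only a starting point: the base model is the one pulled earlier, while the parameter values and the system prompt are placeholders you would have to tune yourself.

```
# Modelfile – derive a custom model from the pulled base image
FROM mychen76/qwen3_cline_roocode:30b-128k

# Context window and sampling parameters (illustrative values)
PARAMETER num_ctx 131072
PARAMETER temperature 0.7

# Custom system prompt for agentic use (placeholder text)
SYSTEM "You are a careful coding agent. Follow tool-calling instructions exactly."
```

Build and run it with ollama create and ollama run, using a model name of your choice (my-cline-qwen here): ollama create my-cline-qwen -f Modelfile, then ollama run my-cline-qwen.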
