MuleSoft-Platform-Architect-I Exam Questions

Total 97 Questions


Last Updated On: 16-Jan-2025

A company uses a hybrid Anypoint Platform deployment model that combines the EU control plane with customer-hosted Mule runtimes. After successfully testing a Mule API implementation in the Staging environment, the Mule API implementation is set with environment-specific properties and must be promoted to the Production environment. What is a way that MuleSoft recommends to configure the Mule API implementation and automate its promotion to the Production environment?


A. Bundle properties files for each environment into the Mule API implementation's deployable archive, then promote the Mule API implementation to the Production environment using Anypoint CLI or the Anypoint Platform REST APIs


B. Modify the Mule API implementation's properties in the API Manager Properties tab, then promote the Mule API implementation to the Production environment using API Manager


C. Modify the Mule API implementation's properties in Anypoint Exchange, then promote the Mule API implementation to the Production environment using Runtime Manager


D. Use an API policy to change properties in the Mule API implementation deployed to the Staging environment and another API policy to deploy the Mule API implementation to the Production environment





A.
  Bundle properties files for each environment into the Mule API implementation's deployable archive, then promote the Mule API implementation to the Production environment using Anypoint CLI or the Anypoint Platform REST APIs

Explanation

Correct Answer: Bundle properties files for each environment into the Mule API implementation's deployable archive, then promote the Mule API implementation to the Production environment using Anypoint CLI or the Anypoint Platform REST APIs

*****************************************

Anypoint Exchange is for asset discovery and documentation; it provides no way to modify the properties of a Mule API implementation.

API Manager is for managing API instances, their contracts, policies, and SLAs; it likewise provides no way to modify the properties of an API implementation.

API policies address non-functional requirements of APIs and also provide no way to modify the properties of an API implementation.

So the recommended development practice is to bundle a properties file for each environment into the Mule API implementation's deployable archive, reference the appropriate file per environment, and automate promotion with Anypoint CLI or the Anypoint Platform REST APIs.
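
As a hedged illustration of this pattern (not Mule's actual configuration mechanism), the sketch below shows how an environment name supplied at deploy time can select the matching bundled properties file; the file names and the ENV variable are assumptions made for the example.

# Minimal sketch of per-environment property resolution, assuming the
# deployable archive bundles config-staging.properties and
# config-production.properties and that an "ENV" value is supplied at
# deploy time. Names are illustrative, not Mule's actual mechanism.
import os

def load_properties(env: str) -> dict:
    """Read simple key=value pairs from the bundled file for the given environment."""
    props = {}
    with open(f"config-{env}.properties", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                props[key.strip()] = value.strip()
    return props

if __name__ == "__main__":
    # The same deployable archive is promoted unchanged; only the env value differs.
    print(load_properties(os.environ.get("ENV", "staging")))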

What best describes the Fully Qualified Domain Names (FQDNs), also known as DNS entries, created when a Mule application is deployed to the CloudHub Shared Worker Cloud?


A. A fixed number of FQDNs are created, IRRESPECTIVE of the environment and VPC design


B. The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region


C. The FQDNs are determined by the application name, but can be modified by an administrator after deployment


D. The FQDNs are determined by both the application name and the Anypoint Platform organization





B.
  The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region

Explanation

Correct Answer: The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region

*****************************************

When deploying applications to the Shared Worker Cloud, the FQDN is always determined by the application name chosen.

It does NOT matter which region the application is deployed to.

Although the generated FQDN does include the region (for example, exp-salesorder-api.au-s1.cloudhub.io), that does NOT mean the same application name can be reused when deploying to another CloudHub region.

The application name must be universally unique, irrespective of region and organization, and it alone determines the FQDN on the shared load balancers.
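
As a simple illustration of the naming pattern described above (the default region code below is an assumption for the example; only the sample FQDN comes from the explanation):

# Sketch of how a Shared Worker Cloud FQDN is derived: the application name
# determines the host, the region only appears as a suffix, and uniqueness is
# enforced on the application name across all regions and organizations.
def shared_lb_fqdn(app_name: str, region: str = "us-e2") -> str:
    return f"{app_name}.{region}.cloudhub.io"

# e.g. the example from the explanation above:
print(shared_lb_fqdn("exp-salesorder-api", "au-s1"))  # exp-salesorder-api.au-s1.cloudhub.io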

An organization has created an API-led architecture that uses various API layers to integrate mobile clients with a backend system. The backend system consists of a number of specialized components and can be accessed via a REST API. The process and experience APIs share the same bounded-context model that is different from the backend data model. What additional canonical models, bounded-context models, or anti-corruption layers are best added to this architecture to help process data consumed from the backend system?


A. Create a bounded-context model for every layer and overlap them when the boundary contexts overlap, letting API developers know about the differences between upstream and downstream data models


B. Create a canonical model that combines the backend and API-led models to simplify and unify data models, and minimize data transformations.


C. Create a bounded-context model for the system layer to closely match the backend data model, and add an anti-corruption layer to let the different bounded contexts cooperate across the system and process layers


D. Create an anti-corruption layer for every API to perform transformation for every data model to match each other, and let data simply travel between APIs to avoid the complexity and overhead of building canonical models





C.
  Create a bounded-context model for the system layer to closely match the backend data model, and add an anti-corruption layer to let the different bounded contexts cooperate across the system and process layers

Explanation

Correct Answer: Create a bounded-context model for the system layer to closely match the backend data model, and add an anti-corruption layer to let the different bounded contexts cooperate across the system and process layers

*****************************************

A canonical model is not an option here because the organization has already invested in bounded-context models for the Experience and Process APIs.

Anti-corruption layers for ALL APIs are unnecessary because the Experience and Process APIs already share the same bounded-context model; only the System layer APIs still need an approach.

So an anti-corruption layer just between the Process and System layers works well. To speed things up, the System APIs can closely mirror the backend system's data model.
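
A minimal sketch of what such an anti-corruption layer could look like in code; the backend and process-layer order representations and their field names are invented purely for illustration.

# Hypothetical models: the system layer mirrors the backend data model, and an
# anti-corruption layer translates it into the bounded-context model shared by
# the process and experience layers.
from dataclasses import dataclass

@dataclass
class BackendOrder:          # mirrors the backend system's data model
    ord_id: str
    cust_ref: str
    amt_cents: int

@dataclass
class Order:                 # bounded-context model used by process/experience APIs
    order_id: str
    customer_id: str
    total: float

def translate(backend_order: BackendOrder) -> Order:
    """Anti-corruption layer: keep backend quirks (e.g. amounts in cents)
    from leaking into the upstream bounded context."""
    return Order(
        order_id=backend_order.ord_id,
        customer_id=backend_order.cust_ref,
        total=backend_order.amt_cents / 100.0,
    )

print(translate(BackendOrder("O-1001", "C-42", 129900)))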

Which of the following best fits the definition of API-led connectivity?


A. API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization


B. API-led connectivity is a 3-layered architecture covering Experience, Process and System layers


C. API-led connectivity is a technology which enabled us to implement Experience, Process and System layer based APIs





A.
  API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization

Explanation

Correct Answer: API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization.

*****************************************

Reference: https://blogs.mulesoft.com/dev/api-dev/what-is-api-led-connectivity/

Once an API implementation is ready and the API is registered on API Manager, who should request access to the API on Anypoint Exchange?


A. None


B. Both


C. API Client


D. API Consumer





D.
  API Consumer

Explanation

Correct Answer: API Consumer

*****************************************

An API client is a piece of code or a program that uses the API consumer's client credentials; it does not interact directly with Anypoint Exchange to obtain access.

The API consumer is the one who registers and requests access to the API; the API client then uses the issued client credentials to invoke the API.

So the API consumer is the one who requests access to the API on Anypoint Exchange.
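
As a hedged illustration of that split between the consumer (who obtains the credentials on Exchange) and the client (the code that uses them), the sketch below calls a hypothetical managed API; the URL and the header names are assumptions for the example.

# Sketch of an API client using client credentials that the API consumer
# obtained by requesting access on Anypoint Exchange. The endpoint and header
# names are illustrative assumptions, not a specific managed API.
import urllib.request

API_URL = "https://example.cloudhub.io/api/orders"      # hypothetical API instance
CLIENT_ID = "<client id issued on access approval>"
CLIENT_SECRET = "<client secret issued on access approval>"

request = urllib.request.Request(
    API_URL,
    headers={
        "client_id": CLIENT_ID,          # headers typically checked by a
        "client_secret": CLIENT_SECRET,  # client ID enforcement policy
    },
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read()[:200])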

What is a key requirement when using an external Identity Provider for Client Management in Anypoint Platform?


A. Single sign-on is required to sign in to Anypoint Platform


B. The application network must include System APIs that interact with the Identity Provider


C. To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider


D. APIs managed by Anypoint Platform must be protected by SAML 2.0 policies





C.
  To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider

Explanation

Correct Answer: To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider

*****************************************

Single sign-on is NOT required to sign in to Anypoint Platform just because an external Identity Provider is used for client management.

It is NOT required that APIs managed by Anypoint Platform be protected by SAML 2.0 policies just because an external Identity Provider is used for client management.

It is NOT true that the application network must include System APIs that interact with the Identity Provider just because an external Identity Provider is used for client management.

The only TRUE statement among the options is: "To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider" (see the sketch after the references below).

References:

https://docs.mulesoft.com/api-manager/2.x/external-oauth-2.0-token-validation-policy

https://blogs.mulesoft.com/dev/api-dev/api-security-ways-to-authenticate-and-authorize/
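
As a sketch of this flow, assuming a hypothetical external Identity Provider token endpoint, a hypothetical OAuth 2.0-protected API instance, and a client credentials grant (all URLs and credentials below are placeholders):

# The client first obtains an access token from the external Identity Provider,
# then submits it as a Bearer token to the OAuth 2.0-protected API managed by
# Anypoint Platform. URLs, grant type, and credentials are illustrative.
import json
import urllib.parse
import urllib.request

TOKEN_URL = "https://idp.example.com/oauth2/token"       # hypothetical IdP endpoint
API_URL = "https://example.cloudhub.io/api/customers"    # hypothetical protected API

token_request = urllib.request.Request(
    TOKEN_URL,
    data=urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": "<client id>",
        "client_secret": "<client secret>",
    }).encode(),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
with urllib.request.urlopen(token_request) as response:
    access_token = json.load(response)["access_token"]

api_request = urllib.request.Request(
    API_URL, headers={"Authorization": f"Bearer {access_token}"}
)
with urllib.request.urlopen(api_request) as response:
    print(response.status)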

A REST API is being designed to implement a Mule application. What standard interface definition language can be used to define REST APIs?


A. Web Service Definition Language (WSDL)


B. OpenAPI Specification (OAS)


C. YAML


D. AsyncAPI Specification





B.
  OpenAPI Specification (OAS)
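
For context, an OAS document is normally authored in YAML or JSON; the sketch below mirrors a minimal OAS 3.0 definition for a hypothetical /orders resource, expressed as a Python dict purely for illustration.

# Minimal shape of an OAS 3.0 definition (normally written in YAML or JSON);
# the /orders resource is a hypothetical example.
import json

minimal_oas = {
    "openapi": "3.0.0",
    "info": {"title": "Order API", "version": "1.0.0"},
    "paths": {
        "/orders": {
            "get": {
                "summary": "List orders",
                "responses": {"200": {"description": "A list of orders"}},
            }
        }
    },
}

print(json.dumps(minimal_oas, indent=2))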

How are an API implementation, API client, and API consumer combined to invoke and process an API?


A. The API consumer creates an API implementation, which receives API invocations from an API such that they are processed for an API client


B. The API client creates an API consumer, which receives API invocations from an API such that they are processed for an API implementation


C. The API consumer creates an API client, which sends API invocations to an API such that they are processed by an API implementation


D. The API client creates an API consumer, which sends API invocations to an API such that they are processed by an API implementation





C.
  The API consumer creates an API client, which sends API invocations to an API such that they are processed by an API implementation

Explanation

Correct Answer: The API consumer creates an API client, which sends API invocations to an API such that they are processed by an API implementation

*****************************************

Terminology:

API Client - A piece of code or a program written to invoke an API.

API Consumer - The owner/entity that owns the API client. API consumers write API clients.

API - The provider of the API functionality; typically an API instance on API Manager, where it is managed and operated.

API Implementation - The actual code written by the API provider in which the API's functionality is implemented; typically a Mule application running on Runtime Manager.

A retail company is using an Order API to accept new orders. The Order API uses a JMS queue to submit orders to a backend order management service. The normal load for orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore. The CPU load of each CloudHub worker normally runs well below 70%. However, several times during the year the Order API gets four times (4x) the average number of orders. This causes the CloudHub worker CPU load to exceed 90% and the order submission time to exceed 30 seconds. The cause, however, is NOT the backend order management service, which still responds fast enough to meet the response SLA for the Order API. What is the MOST resource-efficient way to configure the Mule application's CloudHub deployment to help the company cope with this performance challenge?


A. Permanently increase the size of each of the two (2) CloudHub workers by at least four times (4x) to one (1) vCore


B. Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than 70%


C. Permanently increase the number of CloudHub workers by four times (4x) to eight (8) CloudHub workers


D. Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%





D.
  Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%

Explanation

Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%

*****************************************

The scenario clearly states that the usual traffic throughout the year is handled well by the existing worker configuration, with CPU running well below 70%. The problem occurs only occasionally, when there is a spike in the number of incoming orders.

Based on that, we need neither a permanent increase in the size of each worker nor a permanent increase in the number of workers. Either would be wasteful, because outside those occasional spikes the extra resources would sit idle.

That leaves two options: a horizontal CloudHub autoscaling policy that automatically increases the number of workers, or a vertical CloudHub autoscaling policy that automatically increases the vCore size of each worker.

Two things need to be considered:

1. CPU

2. Order submission rate to the JMS queue

From a CPU perspective, both options (horizontal and vertical scaling) solve the issue; both bring utilization back below 90%.

However, with vertical scaling the application is still load balanced across only two workers, so from an order submission perspective there may be little improvement in the request processing rate or the rate at which orders are submitted to the JMS queue. Throughput stays roughly the same; only CPU utilization comes down.

With horizontal scaling, new workers are spawned and load balanced alongside the existing ones, increasing throughput and addressing both the CPU utilization and the order submission rate.

Hence, a horizontal CloudHub autoscaling policy is the best answer.
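
Conceptually, a horizontal autoscaling policy behaves like the sketch below. This illustrates only the trigger logic; the thresholds and worker limits are taken from the scenario or assumed, and this is not CloudHub's actual policy configuration.

# Conceptual sketch of a horizontal autoscaling trigger: scale out while
# sustained CPU exceeds 70%, scale back in once load subsides. Limits and
# thresholds are illustrative, not CloudHub's real policy mechanism.
MIN_WORKERS = 2   # normal load is handled by two 0.2 vCore workers
MAX_WORKERS = 8   # assumed cap for the occasional 4x spike

def desired_worker_count(current_workers: int, sustained_cpu_pct: float) -> int:
    if sustained_cpu_pct > 70 and current_workers < MAX_WORKERS:
        return current_workers + 1   # scale out during the spike
    if sustained_cpu_pct < 40 and current_workers > MIN_WORKERS:
        return current_workers - 1   # scale back in after the spike passes
    return current_workers

print(desired_worker_count(2, 92.0))  # -> 3 (scale out)
print(desired_worker_count(3, 35.0))  # -> 2 (scale back in)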

A System API is designed to retrieve data from a backend system that has scalability challenges. What API policy can best safeguard the backend system?


A. IP whitelist


B. SLA-based rate limiting


C. OAuth 2 token enforcement


D. Client ID enforcement





B.
  SLA-based rate limiting

Explanation

Correct Answer: SLA-based rate limiting

*****************************************

The Client ID enforcement policy addresses a "compliance"-related NFR and does not help maintain quality of service (QoS). It is NOT meant for protecting backend systems from scalability challenges.

IP whitelisting and OAuth 2.0 token enforcement address "security"-related NFRs and likewise do not help maintain QoS. They are NOT meant for protecting backend systems from scalability challenges.

Rate Limiting, Rate Limiting - SLA, Throttling, and Spike Control are the "quality of service (QoS)"-related policies, and they are the ones meant to protect backend systems from being overloaded (illustrated in the sketch after the reference link below).

https://dzone.com/articles/how-to-secure-apis
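
To illustrate the idea behind rate limiting, here is a conceptual fixed-window sketch; the limit and window size are assumed values, and this is not the implementation of the Anypoint rate limiting policy.

# Conceptual fixed-window rate limiter: reject requests beyond a per-client
# limit per time window so the backend is never overwhelmed. The limit and
# window below are assumed values, not a specific SLA tier.
import time
from collections import defaultdict

LIMIT = 100           # requests allowed per client per window
WINDOW_SECONDS = 60

_counters = defaultdict(int)

def allow_request(client_id: str, now=None) -> bool:
    now = time.time() if now is None else now
    window = int(now // WINDOW_SECONDS)
    if _counters[(client_id, window)] >= LIMIT:
        return False   # over the limit: respond 429 and protect the backend
    _counters[(client_id, window)] += 1
    return True

print(allow_request("consumer-a"))  # True until the per-window limit is reached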



About Salesforce MuleSoft Platform Architect I Exam


The Salesforce MuleSoft Platform Architect I exam is a certification designed for architects who specialize in designing enterprise-level integrations and API solutions using the MuleSoft Anypoint Platform.

Key Facts:

Exam Questions: 60
Type of Questions: MCQs
Exam Time: 90 minutes
Exam Price: $375
Passing Score: 70%

Key Topics:

1. API Design and Implementation: 20% of exam
2. Application Networks: 20% of exam
3. Anypoint Platform Basics: 15% of exam
4. Security and Governance: 15% of exam
5. Performance Optimization: 15% of exam
6. Deployment and Management: 10% of exam
7. Troubleshooting: 5% of exam

Benefits of Salesforce MuleSoft Platform Architect I Certification


Professional Recognition: Demonstrates your expertise in MuleSoft platform architecture.
Career Advancement: Opens doors to senior architecture roles and positions in integration-focused organizations.
Increased Earning Potential: Certified professionals command higher salaries and better job opportunities.
Enterprise Expertise: Positions you as a trusted advisor for large-scale integration projects.