MuleSoft-Platform-Architect-I Exam Questions

Total 152 Questions


Last Updated On : 17-Feb-2025



Preparing with the MuleSoft-Platform-Architect-I practice test is essential to ensure success on the exam. This Salesforce practice test lets you familiarize yourself with the format of MuleSoft-Platform-Architect-I exam questions and identify your strengths and weaknesses. By practicing thoroughly, you can maximize your chances of passing the Salesforce certification exam on your first attempt.

What is a typical result of using a fine-grained rather than a coarse-grained API deployment model to implement a given business process?


A. A decrease in the number of connections within the application network supporting the business process


B. A higher number of discoverable API-related assets in the application network


C. A better response time for the end user as a result of the APIs being smaller in scope and complexity


D. An overall lower usage of resources because each fine-grained API consumes fewer resources





B.
  A higher number of discoverable API-related assets in the application network

Explanation

Correct Answer: A higher number of discoverable API-related assets in the application network.

*****************************************

>> We do NOT get faster response times with a fine-grained approach compared to a coarse-grained one.

>> In fact, a network of coarse-grained APIs gives faster response times than one built from fine-grained APIs, for the reasons below.

Fine-grained approach:

1. There are more APIs than in a coarse-grained model.

2. So more orchestration is needed to deliver a given piece of business-process functionality.

3. That means many more API calls, so more connections must be established: more hops, more network I/O, and more integration points than in a coarse-grained approach, where fewer APIs embed bulk functionality.

4. Because of those extra hops and the added latency, the fine-grained approach has somewhat higher response times than the coarse-grained one.

5. Beyond the added latency and connections, more resources are consumed overall because there are more APIs to run.

That is why fine-grained APIs are good for exposing a larger number of reusable, discoverable assets in your network. The trade-off is more maintenance and more integration points, connections, and resources to look after, plus a small penalty in network hops and response times.
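The latency argument in the numbered points above can be sketched with a toy model. All figures below are assumed purely for illustration (no real deployment was measured), and the function name is hypothetical:

```python
# Toy latency model: every API call pays an assumed fixed network cost.
NETWORK_HOP_MS = 20   # assumed per-call overhead (connection, I/O, hops)
PROCESSING_MS = 30    # assumed per-API processing time

def response_time_ms(api_calls: int) -> int:
    """Rough end-to-end latency when each API call adds a network hop."""
    return api_calls * (NETWORK_HOP_MS + PROCESSING_MS)

coarse = response_time_ms(1)  # one coarse-grained API does everything
fine = response_time_ms(5)    # five fine-grained APIs orchestrated together

print(f"coarse-grained: {coarse} ms, fine-grained: {fine} ms")
```

Serving the same business function through five orchestrated calls multiplies the fixed per-call costs, which is exactly the hop/latency penalty described above.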

A Mule application is being designed to implement a REST API. What standard interface definition language can be used to define REST APIs?


A. Web Service Definition Language (WSDL)


B. OpenAPI Specification (OAS)


C. YAML


D. AsyncAPI Specification





B.
  OpenAPI Specification (OAS)
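
For reference, a minimal OAS 3.0 document has the shape below. The API title, path, and fields are hypothetical, shown only to illustrate the format:

```yaml
openapi: "3.0.0"
info:
  title: Order API          # hypothetical API name
  version: "1.0.0"
paths:
  /orders/{orderId}:
    get:
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested order
```

WSDL describes SOAP web services, YAML is just a serialization format (OAS and RAML documents are often written in it), and AsyncAPI targets event-driven/asynchronous APIs, which is why OAS is the answer here.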

A retail company is using an Order API to accept new orders. The Order API uses a JMS queue to submit orders to a backend order management service. The normal load for orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore. The CPU load of each CloudHub worker normally runs well below 70%. However, several times during the year the Order API gets four times (4x) the average number of orders. This causes the CloudHub worker CPU load to exceed 90% and the order submission time to exceed 30 seconds. The cause, however, is NOT the backend order management service, which still responds fast enough to meet the response SLA for the Order API. What is the MOST resource-efficient way to configure the Mule application's CloudHub deployment to help the company cope with this performance challenge?


A. Permanently increase the size of each of the two (2) CloudHub workers by at least four times (4x) to one (1) vCore


B. Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than 70%


C. Permanently increase the number of CloudHub workers by four times (4x) to eight (8) CloudHub workers


D. Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%





D.
  Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%

Explanation

Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%

The scenario clearly states that the usual traffic throughout the year is handled well by the existing worker configuration, with CPU running well below 70%. The problem occurs only occasionally, when there is a spike in the number of incoming orders.

So, based on the above, we need neither to permanently increase the size of each worker nor to permanently increase the number of workers. Either would be wasteful, since outside those occasional spikes the extra resources would sit idle.

That leaves two options: use a horizontal CloudHub autoscaling policy to automatically increase the number of workers, or use a vertical CloudHub autoscaling policy to automatically increase the vCore size of each worker.

Here, we need to take two things into consideration:

1. CPU

2. Order Submission Rate to JMS Queue

>> From a CPU perspective, both options (horizontal and vertical scaling) solve the issue; both bring utilization back below 90%.

>> However, with vertical scaling the application is still load balanced across only two workers, so from the order-submission-rate perspective there may not be much improvement in the rate of incoming request processing or of order submission to the JMS queue. Throughput stays roughly the same; only CPU utilization comes down.

>> With horizontal scaling, on the other hand, new workers are spawned and added behind the load balancer, increasing throughput. This addresses both the CPU utilization and the order submission rate.

Hence, a horizontal CloudHub autoscaling policy is the right and best answer.
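The throughput difference can be made concrete with a toy model. The numbers, and the simplifying assumption (following the explanation above) that throughput is bounded by worker count rather than worker size, are illustrative only:

```python
# Toy capacity model with assumed numbers, not CloudHub measurements.
BASE_WORKERS = 2
THROUGHPUT_PER_WORKER = 100   # assumed orders/minute one 0.2 vCore worker sustains
SPIKE_FACTOR = 4

demand = BASE_WORKERS * THROUGHPUT_PER_WORKER * SPIKE_FACTOR  # 4x spike load

def horizontal(workers: int) -> int:
    # More workers behind the load balancer => proportionally more throughput.
    return workers * THROUGHPUT_PER_WORKER

def vertical(vcores_per_worker: float) -> int:
    # Bigger workers lower CPU load, but in this simple model the two
    # load-balanced workers still bound the request-processing rate.
    return BASE_WORKERS * THROUGHPUT_PER_WORKER

print("demand during spike:", demand)
print("scale out to 8 workers:", horizontal(8))   # meets the spike
print("scale up to 1 vCore each:", vertical(1.0)) # throughput unchanged here
```

Under these assumptions only scaling out closes the gap between spike demand and capacity, which mirrors the reasoning for choosing the horizontal policy.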

A company wants to move its Mule API implementations into production as quickly as possible. To protect access to all Mule application data and metadata, the company requires that all Mule applications be deployed to the company's customer-hosted infrastructure within the corporate firewall. What combination of runtime plane and control plane options meets these project lifecycle goals?


A. Manually provisioned customer-hosted runtime plane and customer-hosted control plane


B. MuleSoft-hosted runtime plane and customer-hosted control plane


C. Manually provisioned customer-hosted runtime plane and MuleSoft-hosted control plane


D. iPaaS provisioned customer-hosted runtime plane and MuleSoft-hosted control plane





A.
  Manually provisioned customer-hosted runtime plane and customer-hosted control plane

Explanation

Correct Answer: Manually provisioned customer-hosted runtime plane and customer-hosted control plane

*****************************************

There are two key factors to take into consideration from the scenario given in the question.

>> The company requires both data and metadata to reside within the corporate firewall.

>> The company wants to use customer-hosted infrastructure.

Any deployment model that deals with the cloud directly or indirectly (MuleSoft-hosted, or the customer's own cloud such as Azure or AWS) has to share at least the metadata.

Application data can be kept inside the firewall by running Mule runtimes on a customer-hosted runtime plane. But with a MuleSoft-hosted/cloud-based control plane, the control plane requires at least some minimal metadata to be sent outside the corporate firewall.

Since the customer requirement is clear that both data and metadata must stay within the corporate firewall, then even though the customer wants to move to production as quickly as possible, the nature of their security requirements leaves no option but a manually provisioned customer-hosted runtime plane and a customer-hosted control plane.

An API has been updated in Anypoint Exchange by its API producer from version 3.1.1 to 3.2.0 following accepted semantic versioning practices and the changes have been communicated via the API's public portal. The API endpoint does NOT change in the new version. How should the developer of an API client respond to this change?


A. The update should be identified as a project risk and full regression testing of the functionality that uses this API should be run


B. The API producer should be contacted to understand the change to existing functionality


C. The API producer should be requested to run the old version in parallel with the new one


D. The API client code ONLY needs to be changed if it needs to take advantage of new features





D.
  The API client code ONLY needs to be changed if it needs to take advantage of new features

Reference: https://docs.mulesoft.com/exchange/to-change-raml-version
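
The versioning rule behind this answer can be sketched in a few lines. Under semantic versioning (MAJOR.MINOR.PATCH), 3.1.1 to 3.2.0 is a minor bump, which must be backwards compatible; the helper below is a hypothetical illustration, not part of any MuleSoft API:

```python
# Semantic versioning: only a MAJOR bump signals a breaking change,
# so only then must existing API clients change their code.

def client_must_change(old: str, new: str) -> bool:
    """True only when the MAJOR component differs (breaking change)."""
    old_major = int(old.split(".")[0])
    new_major = int(new.split(".")[0])
    return new_major != old_major

print(client_must_change("3.1.1", "3.2.0"))  # minor bump: client keeps working
print(client_must_change("3.1.1", "4.0.0"))  # major bump: breaking change
```

Since 3.1.1 to 3.2.0 keeps the major version (and the endpoint) unchanged, the client only needs updating if it wants the new features.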

What is true about where an API policy is defined in Anypoint Platform and how it is then applied to API instances?


A. The API policy is defined in Runtime Manager as part of the API deployment to a Mule runtime, and then ONLY applied to the specific API instance


B. The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance


C. The API policy is defined in API Manager and then automatically applied to ALL API instances


D. The API policy is defined in API Manager, and then applied to ALL API instances in the specified environment





B.
  The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance

Explanation

Correct Answer: The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance.

*****************************************

>> Once our API specifications are ready and published to Exchange, we need to visit API Manager and register an API instance for each API.

>> API Manager is where the management aspects of APIs take place, such as addressing NFRs by enforcing policies on them.

>> We can create multiple instances of the same API and manage them differently for different purposes.

>> One instance can have one set of API policies applied while another instance of the same API has a different set applied for some other purpose.

>> These APIs and their instances are defined PER environment, so they need to be managed separately in each environment.

>> Using a platform feature, we can ensure that the same API instance configuration (SLAs, policies, etc.) gets promoted to higher environments. But this is optional; configurations can still be changed per environment if needed.

>> Runtime Manager is the place to manage API implementations and their Mule runtimes, but NOT the APIs themselves. Although API policies execute in Mule runtimes, we CANNOT enforce API policies from Runtime Manager; that must be done via API Manager, for a specific instance in a given environment.

So, based on these facts, the right statement among the given choices is: "The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance".

Reference: https://docs.mulesoft.com/api-manager/2.x/latest-overview-concept

What API policy would LEAST likely be applied to a Process API?


A. Custom circuit breaker


B. Client ID enforcement


C. Rate limiting


D. JSON threat protection





D.
  JSON threat protection

Explanation

Correct Answer: JSON threat protection

*****************************************

Fact: Technically, there are no restrictions on which policy can be applied at which layer; any policy can be applied to an API in any layer. However, context should be considered properly before blindly applying policies to APIs.

That is why this question asks for the policy that would LEAST likely be applied to a Process API.

From the given options:

>> All policies except "JSON threat protection" can be applied without hesitation to APIs in the Process tier.

>> The JSON threat protection policy is ideally suited to Experience APIs, to block suspicious JSON payloads coming from external API clients. It addresses a security concern by rejecting potentially malicious or harmful JSON payloads from external clients calling Experience APIs.

Because external API clients are NEVER allowed to call Process APIs directly, and such malicious payloads are always stopped at the Experience API layer by this policy, it is LEAST likely that the same policy would be applied again to a Process-layer API.

Reference: https://docs.mulesoft.com/api-manager/2.x/policy-mule3-provided-policies
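
To make the policy's purpose concrete, here is a rough sketch of the kind of structural checks a JSON threat protection policy performs at the Experience layer. The limit values and function names are assumptions for illustration; the real policy's checks and defaults are configured in API Manager:

```python
import json

# Assumed limits for illustration only; the real policy's limits are configurable.
MAX_DEPTH = 5
MAX_ARRAY_LEN = 100

def _depth(value, d=1):
    """Measure nesting depth, failing fast on oversized arrays."""
    if isinstance(value, dict):
        return max((_depth(v, d + 1) for v in value.values()), default=d)
    if isinstance(value, list):
        if len(value) > MAX_ARRAY_LEN:
            raise ValueError("array too long")
        return max((_depth(v, d + 1) for v in value), default=d)
    return d

def reject_payload(raw: str) -> bool:
    """True if the payload should be blocked before reaching business logic."""
    try:
        return _depth(json.loads(raw)) > MAX_DEPTH
    except ValueError:
        return True  # malformed JSON or oversized array

print(reject_payload('{"order": {"id": 1}}'))                   # shallow, well-formed
print(reject_payload('{"a":{"b":{"c":{"d":{"e":{"f":1}}}}}}'))  # nesting too deep
```

Checks like these only make sense where untrusted external payloads first enter the network, which is why the Experience layer, not the Process layer, is the natural home for this policy.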

What is typically NOT a function of the APIs created within the framework called API-led connectivity?


A. They provide an additional layer of resilience on top of the underlying backend system, thereby insulating clients from extended failure of these systems.


B. They allow for innovation at the user interface level by consuming the underlying assets without being aware of how data is being extracted from backend systems.


C. They reduce the dependency on the underlying backend systems by helping unlock data from backend systems in a reusable and consumable way.


D. They can compose data from various sources and combine them with orchestration logic to create higher level value.





A.
  They provide an additional layer of resilience on top of the underlying backend system, thereby insulating clients from extended failure of these systems.

Explanation

Correct Answer: They provide an additional layer of resilience on top of the underlying backend system, thereby insulating clients from extended failure of these systems.

*****************************************

In API-led connectivity:

>> Experience APIs - allow for innovation at the user interface level by consuming the underlying assets without being aware of how data is being extracted from backend systems.

>> Process APIs - compose data from various sources and combine them with orchestration logic to create higher level value

>> System APIs - reduce the dependency on the underlying backend systems by helping unlock data from backend systems in a reusable and consumable way.

However, they NEVER promise that they provide an additional layer of resilience on top of the underlying backend system, thereby insulating clients from extended failure of these systems.

Reference: https://dzone.com/articles/api-led-connectivity-with-mule

When using CloudHub with the Shared Load Balancer, what is managed EXCLUSIVELY by the API implementation (the Mule application) and NOT by Anypoint Platform?


A. The assignment of each HTTP request to a particular CloudHub worker


B. The logging configuration that enables log entries to be visible in Runtime Manager


C. The SSL certificates used by the API implementation to expose HTTPS endpoints


D. The number of DNS entries allocated to the API implementation





C.
  The SSL certificates used by the API implementation to expose HTTPS endpoints

Explanation

Correct Answer: The SSL certificates used by the API implementation to expose HTTPS endpoints

*****************************************

>> The assignment of each HTTP request to a particular CloudHub worker is taken care of by Anypoint Platform itself. We need not manage it explicitly in the API implementation, and in fact we CANNOT manage it there.

>> The logging configuration that enables log entries to be visible in Runtime Manager is ALWAYS managed in the API implementation, not only when using the Shared Load Balancer. So it is not something handled EXCLUSIVELY in this scenario.

>> We DO NOT manage the number of DNS entries allocated to the API implementation inside the code; Anypoint Platform takes care of this.

It is the SSL certificates used by the API implementation to expose HTTPS endpoints that must be managed EXCLUSIVELY by the API implementation; Anypoint Platform does NOT do this when the Shared Load Balancer is used.
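
As an illustration, in a Mule 4 application the HTTPS listener's TLS context, including the keystore holding the SSL certificate, is configured in the application itself. This is only a sketch: the keystore path, alias, and passwords are placeholders, and XML namespace declarations are omitted for brevity:

```xml
<!-- Hypothetical HTTPS listener config owned by the Mule application -->
<http:listener-config name="httpsListenerConfig">
  <http:listener-connection host="0.0.0.0" port="8082" protocol="HTTPS">
    <tls:context>
      <!-- keystore path, alias, and passwords below are placeholders -->
      <tls:key-store type="jks" path="keystore.jks" alias="api"
                     keyPassword="changeit" password="changeit"/>
    </tls:context>
  </http:listener-connection>
</http:listener-config>
```

The keystore is packaged with and owned by the application, which is why the certificate is the one item the platform does not manage for you when using the Shared Load Balancer.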

What is most likely NOT a characteristic of an integration test for a REST API implementation?


A. The test needs all source and/or target systems configured and accessible


B. The test runs immediately after the Mule application has been compiled and packaged


C. The test is triggered by an external HTTP request


D. The test prepares a known request payload and validates the response payload





B.
  The test runs immediately after the Mule application has been compiled and packaged

Explanation

Correct Answer: The test runs immediately after the Mule application has been compiled and packaged

*****************************************

>> Integration tests are the last layer of tests we need to add to be fully covered.

>> These tests run against a Mule runtime with your full configuration in place and are exercised from an external source, just as they would be in production.

>> These tests exercise the application as a whole with the actual transports enabled, so external systems are affected when they run. That is why these tests do NOT run immediately after the Mule application has been compiled and packaged.

FYI: unit tests are the ones that run immediately after the Mule application has been compiled and packaged.

Reference:

https://docs.mulesoft.com/mule-runtime/3.9/testing-strategies#integrationtesting
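
The characteristics listed above (externally triggered over HTTP, known request payload, validated response payload) can be sketched as follows. A local stub stands in for the deployed API purely to keep the example self-contained; a real integration test would hit the actual deployed endpoint with its real backends configured:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubApi(BaseHTTPRequestHandler):
    """Hypothetical stand-in for a deployed REST API implementation."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps({"orderId": 42, "item": body["item"]}).encode()
        self.send_response(201)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubApi)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The integration-style test: trigger the API via an external HTTP request
# with a known payload, then validate the response payload.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/orders",
    data=json.dumps({"item": "book"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    status = resp.status
    result = json.loads(resp.read())

server.shutdown()
assert status == 201
assert result == {"orderId": 42, "item": "book"}
print("integration-style test passed")
```

Note how the test needs a reachable system (option A), is triggered by an external HTTP request (option C), and validates a known request/response pair (option D); only option B does not fit.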

