MuleSoft-Integration-Architect-I Exam Questions

Total 268 Questions


Last Updated On : 16-Jan-2025

What is required before an API implemented using the components of Anypoint Platform can be managed and governed (by applying API policies) on Anypoint Platform?


A. The API must be published to Anypoint Exchange and a corresponding API instance ID must be obtained from API Manager to be used in the API implementation


B. The API implementation source code must be committed to a source control management system (such as GitHub)


C. A RAML definition of the API must be created in API designer so it can then be published to Anypoint Exchange


D. The API must be shared with the potential developers through an API portal so API consumers can interact with the API





A.
  The API must be published to Anypoint Exchange and a corresponding API instance ID must be obtained from API Manager to be used in the API implementation

Explanation:

To manage and govern an API on Anypoint Platform, the API must first be published to Anypoint Exchange and registered as an API instance in API Manager. The resulting API instance ID is then configured in the API implementation (via API Autodiscovery) so the gateway can pair the running application with its managed instance and enforce policies.

Anypoint API Manager (API Manager) is a component of Anypoint Platform that enables you to manage, govern, and secure APIs. It leverages the runtime capabilities of API Gateway and Anypoint Service Mesh, both of which enforce policies, collect and track analytics data, manage proxies, provide encryption and authentication, and manage applications.
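As an illustration, a minimal API Autodiscovery configuration might look like the following sketch; the api.id property (holding the instance ID obtained from API Manager) and the flow name order-api-main are assumed placeholders, not from the source:

<!-- Sketch only: pairs the running Mule application with its managed API
     instance so the embedded gateway can enforce policies from API Manager.
     The apiId property and flow name are hypothetical. -->
<api-gateway:autodiscovery apiId="${api.id}" flowRef="order-api-main" />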

References:

https://docs.mulesoft.com/api-manager/2.x/getting-started-proxy

https://docs.mulesoft.com/api-manager/2.x/api-auto-discovery-new-concept

A Mule application is running on a customer-hosted Mule runtime in an organization's network. The Mule application acts as a producer of asynchronous Mule events. Each Mule event must be broadcast to all interested external consumers outside the Mule application. The Mule events should be published in a way that is guaranteed in normal situations and also minimizes duplicate delivery in less frequent failure scenarios.

The organizational firewall is configured to only allow outbound traffic on ports 80 and 443. Some external event consumers are within the organizational network, while others are located outside the firewall.

What Anypoint Platform service is most idiomatic (used for its intended purpose) for publishing these Mule events to all external consumers while addressing the desired reliability goals?


A. CloudHub VM queues


B. Anypoint MQ


C. Anypoint Exchange


D. CloudHub Shared Load Balancer





B.
  Anypoint MQ

Explanation:

Anypoint MQ is MuleSoft's cloud messaging service and the idiomatic choice here. Its message exchanges provide publish/subscribe fan-out, so a single published Mule event is delivered to every subscribed consumer, whether inside or outside the organization's network. Clients reach Anypoint MQ over HTTPS (port 443), which satisfies the outbound-only firewall rule. The Anypoint MQ connector operation can be set to publish or consume messages, or to accept (ACK) or not accept (NACK) a message; these acknowledgment semantics provide guaranteed delivery in normal situations while minimizing duplicates in failure scenarios.
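For illustration, a hedged sketch of publishing to an Anypoint MQ message exchange follows; the connection properties and the exchange name order-events are assumed placeholders, not from the source:

<!-- Anypoint MQ connector configuration; credential values are hypothetical
     properties supplied at deployment time. -->
<anypoint-mq:config name="Anypoint_MQ_Config">
  <anypoint-mq:connection url="${mq.url}" clientId="${mq.clientId}" clientSecret="${mq.clientSecret}" />
</anypoint-mq:config>

<flow name="publish-order-event">
  <!-- event source omitted; publishing to a message exchange fans the event
       out to every queue bound to that exchange -->
  <anypoint-mq:publish config-ref="Anypoint_MQ_Config" destination="order-events" />
</flow>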

Reference: https://docs.mulesoft.com/mq/

A Mule application is required to periodically process a large data set from a back-end database into Salesforce CRM, using a Batch Job scope configured to properly process a high rate of records. The application is deployed to two CloudHub workers with no persistent queues enabled. What is the consequence if a worker crashes during record processing?


A. Remaining records will be processed by a new replacement worker


B. Remaining records will be processed by the second worker


C. Remaining records will be left unprocessed


D. All the records will be processed from scratch by the second worker leading to duplicate processing





D.
  All the records will be processed from scratch by the second worker leading to duplicate processing

Explanation:

When a Mule application uses batch job scope to process large datasets and is deployed on multiple CloudHub workers without persistence queues enabled, the following scenario occurs if a worker crashes:

Batch Job Scope: Batch jobs are designed to handle large datasets by splitting the work into records and processing them in parallel.

Non-Persistent Queues: When persistence is not enabled, the state of the batch processing is not stored persistently. This means that if a worker crashes, the state of the in-progress batch job is lost.

Worker Crash Consequence: Because the in-progress batch state lives only on the crashed worker, the replacement or second worker has no record of which records were already processed. The job therefore starts over from scratch and processes all records again.

This behavior can cause issues such as duplicate data in Salesforce CRM and inefficiencies in processing.
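A hedged sketch of such a batch flow follows; the scheduler frequency, SQL, and connector configuration names are assumed placeholders. The key point is that batch progress is tracked in worker-local queues, which are lost when the worker crashes:

<flow name="sync-records-to-salesforce">
  <scheduler>
    <scheduling-strategy>
      <fixed-frequency frequency="1" timeUnit="HOURS" />
    </scheduling-strategy>
  </scheduler>
  <!-- pull the large data set from the back-end database -->
  <db:select config-ref="Database_Config">
    <db:sql>SELECT * FROM orders WHERE processed = 0</db:sql>
  </db:select>
  <batch:job jobName="ordersBatch">
    <batch:process-records>
      <batch:step name="upsertToSalesforce">
        <!-- per-record Salesforce upsert would go here -->
      </batch:step>
    </batch:process-records>
  </batch:job>
</flow>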

References

MuleSoft Batch Processing

MuleSoft CloudHub Workers

A company is designing an integration Mule application to process orders by submitting them to a back-end system for offline processing. Each order will be received by the Mule application through an HTTP POST and must be acknowledged immediately.

Once acknowledged, the order will be submitted to a back-end system. Orders that cannot be successfully submitted due to rejections from the back-end system will need to be processed manually (outside the banking system).

The Mule application will be deployed to a customer-hosted runtime and will be able to use an existing ActiveMQ broker if needed. The ActiveMQ broker is located inside the organization's firewall. The back-end system has a track record of unreliability due to both minor network connectivity issues and longer outages.

Which combination of Mule application components and ActiveMQ queues is required to ensure automatic submission of orders to the back-end system while supporting, but minimizing, manual order processing?


A. One or more On Error scopes to assist calling the back-end system; an Until Successful scope containing VM components for long retries; a persistent dead-letter VM queue configured in CloudHub


B. An Until Successful scope to call the back-end system; one or more ActiveMQ long-retry queues; one or more ActiveMQ dead-letter queues for manual processing


C. One or more On Error scopes to assist calling the back-end system; one or more ActiveMQ long-retry queues; a persistent dead-letter Object Store configured in the CloudHub Object Store service


D. A Batch Job scope to call the back-end system; an Until Successful scope containing Object Store components for long retries; a dead-letter Object Store configured in the Mule application





B.
  An Until Successful scope to call the back-end system; one or more ActiveMQ long-retry queues; one or more ActiveMQ dead-letter queues for manual processing

Explanation:

To design an integration Mule application that processes orders and ensures reliability even with an unreliable back-end system, the following components and ActiveMQ queues should be used:

Until Successful Scope: This scope ensures that the Mule application will continue trying to submit the order to the back-end system until it succeeds or reaches a specified retry limit. This helps in handling transient network issues or minor outages of the back-end system.

ActiveMQ Long-Retry Queues: By placing the orders in long-retry queues, the application can manage retries over an extended period. This is particularly useful when the back-end system experiences longer outages. The ActiveMQ broker, located within the organization's firewall, can reliably handle these queues.

ActiveMQ Dead-Letter Queues: Orders that cannot be successfully submitted after all retry attempts should be moved to dead-letter queues. This allows for manual processing of these orders. The dead-letter queue ensures that no orders are lost and provides a clear mechanism for handling failed submissions.

Implementation Steps:

HTTP Listener: Set up an HTTP listener to receive incoming orders.

Immediate Acknowledgment: Immediately acknowledge the receipt of the order to the client.

Until Successful Scope: Use the Until Successful scope to attempt submitting the order to the back-end system. Configure retry intervals and limits.

Long-Retry Queues: Configure ActiveMQ long-retry queues to manage retries.

Dead-Letter Queues: Set up ActiveMQ dead-letter queues for orders that fail after maximum retry attempts, allowing for manual intervention.

This approach ensures that the system can handle temporary and prolonged back-end outages while minimizing manual processing.
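A minimal sketch of the retry portion follows, assuming the Mule 4 JMS connector pointed at the on-premises ActiveMQ broker; the queue names, broker URL, and HTTP request configuration are hypothetical placeholders:

<jms:config name="ActiveMQ_Config">
  <jms:active-mq-connection>
    <jms:factory-configuration brokerUrl="tcp://activemq.internal:61616" />
  </jms:active-mq-connection>
</jms:config>

<flow name="submit-order-to-backend">
  <!-- orders acknowledged over HTTP are first queued, then consumed here -->
  <jms:listener config-ref="ActiveMQ_Config" destination="orders.long-retry" />
  <until-successful maxRetries="5" millisBetweenRetries="30000">
    <http:request config-ref="Backend_HTTP_Config" method="POST" path="/orders" />
  </until-successful>
  <error-handler>
    <on-error-continue type="RETRY_EXHAUSTED">
      <!-- retries exhausted: hand the order off for manual processing -->
      <jms:publish config-ref="ActiveMQ_Config" destination="orders.dlq" />
    </on-error-continue>
  </error-handler>
</flow>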

References:

MuleSoft Documentation on Until Successful Scope: https://docs.mulesoft.com/mule-runtime/4.3/until-successful-scope

ActiveMQ Documentation: https://activemq.apache.org/

A Mule application deployed to a CloudHub production environment needs DEBUG-level logging enabled for the org.apache.cxf package only, without raising the log level of the rest of the application. How should the developer update the logging configuration in order to enable this package-specific debugging?


A. In Anypoint Monitoring, define a logging search query with class property set to org.apache.cxf and level set to DEBUG


B. In the Mule application's log4j2.xml file, add an AsyncLogger element with name property set to org.apache.cxf and level set to DEBUG, then redeploy the Mule application in the CloudHub production environment


C. In the Mule application's log4j2.xml file, change the root logger's level property to DEBUG, then redeploy the Mule application to the CloudHub production environment


D. In Anypoint Runtime Manager, in the Deployed Application Properties tab for the Mule application, add a line item with DEBUG level for package org.apache.cxf and apply the changes





B.
  In the Mule application's log4j2.xml file, add an AsyncLogger element with name property set to org.apache.cxf and level set to DEBUG, then redeploy the Mule application in the CloudHub production environment

Explanation:

To enable package-specific debugging for the org.apache.cxf package, you need to update the logging configuration in the Mule application's log4j2.xml file. The steps are as follows:

Open the log4j2.xml file in your Mule application.

Add an AsyncLogger element with the name property set to org.apache.cxf and the level set to DEBUG. This configuration specifies that only the logs from the org.apache.cxf package should be logged at the DEBUG level.

Save the changes to the log4j2.xml file.

Redeploy the updated Mule application to the CloudHub production environment to apply the new logging configuration.

This approach ensures that only the specified package's logging level is changed to DEBUG, minimizing the potential performance impact on the application.
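The relevant addition to the Loggers section of log4j2.xml would look roughly like this (a minimal sketch; the root logger level and appender name shown are assumed to be whatever the application already defines):

<Loggers>
  <!-- package-specific DEBUG logging for Apache CXF only -->
  <AsyncLogger name="org.apache.cxf" level="DEBUG" />

  <!-- root logger keeps its existing level and appender (names assumed) -->
  <AsyncRoot level="INFO">
    <AppenderRef ref="file" />
  </AsyncRoot>
</Loggers>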

References

MuleSoft Documentation on Configuring Logging

Log4j2 Configuration Guide

A leading eCommerce giant will use MuleSoft APIs on Runtime Fabric (RTF) to process customer orders. Some customer-sensitive information, such as credit card information, is required in request payloads or is included in response payloads in some of the APIs. Other API requests and responses are not authorized to access some of this customer-sensitive information but have been implemented to validate and transform based on the structure and format of this customer-sensitive information (such as account IDs, phone numbers, and postal codes).


Later, the project team requires all API specifications to be augmented with an additional non-functional requirement (NFR) to protect the backend services from a high rate of requests, according to defined service-level agreements (SLAs). The NFR's SLAs are based on a new tiered subscription level "Gold", "Silver", or "Platinum" that must be tied to a new parameter that is being added to the Accounts object in their enterprise data model.

Following MuleSoft's recommended best practices, how should the project team now convey the necessary non-functional requirement to stakeholders?


A. Create and deploy API proxies in API Manager for the NFR, change the baseurl in each API specification to the corresponding API proxy implementation endpoint, and publish each modified API specification to Exchange


B. Update each API specification with comments about the NFR's SLAs and publish each modified API specification to Exchange


C. Update each API specification with a shared RAML fragment required to implement the NFR and publish the RAML fragment and each modified API specification to Exchange


D. Create a shared RAML fragment required to implement the NFR, list each API implementation endpoint in the RAML fragment, and publish the RAML fragment to Exchange





C.
  Update each API specification with a shared RAML fragment required to implement the NFR and publish the RAML fragment and each modified API specification to Exchange

Explanation:

To convey the necessary non-functional requirement (NFR) related to protecting backend services from a high rate of requests according to SLAs, the following steps should be taken:

Create a Shared RAML Fragment: Develop a RAML fragment that defines the NFR, including the SLAs for different subscription levels ("Gold", "Silver", "Platinum"). This fragment should include the details on rate limiting and throttling based on the new parameter added to the Accounts object.

Update API Specifications: Integrate the shared RAML fragment into each API specification. This ensures that the NFR is consistently applied across all relevant APIs.

Publish to Exchange: Publish the updated API specifications and the shared RAML fragment to Anypoint Exchange. This makes the NFR visible and accessible to all stakeholders and developers, ensuring compliance and implementation consistency.

This approach ensures that the NFR is clearly communicated and applied uniformly across all API implementations.
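For illustration only, such a shared fragment could be a RAML trait along these lines; the file name and response description below are assumptions, not taken from the source:

#%RAML 1.0 Trait
# rate-limit-sla.raml -- hypothetical shared fragment conveying the SLA NFR
usage: Apply to any resource whose backend must be protected per subscription tier.
description: |
  Requests are rate-limited according to the consumer's subscription level
  ("Gold", "Silver", or "Platinum") tied to the new parameter on the Accounts object.
responses:
  429:
    description: Request rate exceeds the SLA for the caller's subscription tier.

Each API specification would then import the fragment (for example, from Exchange) under traits: and apply it to the protected resources with is:, so the NFR travels with the specification itself.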

References

MuleSoft Documentation on RAML and API Specifications

Best Practices for API Design and Documentation

An insurance provider is implementing Anypoint Platform to manage its application infrastructure and is using the customer-hosted runtime plane due to certain financial requirements it must meet. It has built a number of synchronous APIs and currently hosts these on a Mule runtime on one server.

These applications make use of a number of components, including heavy use of Object Stores and VM queues.

Business has grown rapidly in the last year and the insurance provider is starting to receive reports of reliability issues from its applications.

The DevOps team indicates that the APIs are currently handling too many requests, and this is overloading the server. The team has also mentioned that there is significant downtime when the server is down for maintenance.

As an integration architect, which option would you suggest to mitigate these issues?


A. Add a load balancer and add additional servers in a server group configuration


B. Add a load balancer and add additional servers in a cluster configuration


C. Increase physical specifications of server CPU memory and network


D. Change the applications to use an event-driven model





B.
  Add a load balancer and add additional servers in a cluster configuration

Explanation:

To address the reliability and scalability issues faced by the insurance provider, adding a load balancer and configuring additional servers in a cluster configuration is the optimal solution. Here's why:

Load Balancing: Implementing a load balancer will help distribute incoming API requests evenly across multiple servers. This prevents any single server from becoming a bottleneck, thereby improving the overall performance and reliability of the system.

Cluster Configuration: By setting up a cluster configuration, you ensure that multiple servers work together as a single unit. This provides several benefits:

High Availability: Application instances in a cluster are aware of each other and share common state; if one node fails, another takes over processing, so the APIs remain available.

Maintenance: With a cluster configuration, servers can be taken offline for maintenance one at a time without affecting the overall availability of the applications, as the load balancer can redirect traffic to the remaining servers.

VM Queues and Object Stores: In a cluster, VM queue contents and object store state are shared across nodes, which a server group configuration (option A) does not provide. Given the applications' heavy use of Object Stores and VM queues, this sharing is what makes a cluster, rather than a server group, the right choice.

References:

MuleSoft documentation on clustering: https://docs.mulesoft.com/mule-runtime/4.3/clustering

Best practices for scaling Mule applications: https://blogs.mulesoft.com/dev/mule-dev/mule-4-scaling-applications/

A Mule application uses APIkit for SOAP to implement a SOAP web service. The Mule application has been deployed to a CloudHub worker in a testing environment. The integration testing team wants to use a SOAP client to perform integration testing. To carry out the integration tests, the integration team must obtain the interface definition for the SOAP web service. What is the most idiomatic (used for its intended purpose) way for the integration testing team to obtain the interface definition for the deployed SOAP web service in order to perform integration testing with the SOAP client?


A. Retrieve the OpenAPI Specification file(s) from API Manager


B. Retrieve the WSDL file(s) from the deployed Mule application


C. Retrieve the RAML file(s) from the deployed Mule application


D. Retrieve the XML file(s) from Runtime Manager





B.
  Retrieve the WSDL file(s) from the deployed Mule application

Explanation:

A SOAP web service's interface definition is its WSDL. APIkit for SOAP builds the service from a WSDL, and the deployed Mule application exposes that WSDL at the service endpoint (conventionally by appending ?wsdl to the SOAP service URL), so retrieving it from the running application is the idiomatic approach. OpenAPI specifications (option A) and RAML (option C) describe REST APIs, not SOAP services, and Runtime Manager (option D) manages deployments but does not serve interface definitions.

A manufacturing company is planning to deploy Mule applications to its own Azure Kubernetes Service infrastructure.

The organization wants to make the Mule applications more available and robust by deploying each Mule application to an isolated Mule runtime in a Docker container while managing all the Mule applications from the MuleSoft-hosted control plane.

What is the most idiomatic (used for its intended purpose) choice of runtime plane to meet these organizational requirements?


A. Anypoint Platform Private Cloud Edition


B. Anypoint Runtime Fabric


C. CloudHub


D. Anypoint Service Mesh





B.
  Anypoint Runtime Fabric

Explanation:

Anypoint Runtime Fabric is the runtime plane intended for exactly this case: it deploys each Mule application to its own isolated Mule runtime in a container on customer-managed infrastructure (including Azure Kubernetes Service), while all applications remain managed from the MuleSoft-hosted control plane.

Reference: https://blogs.mulesoft.com/dev-guides/how-to-tutorials/anypoint-runtime-fabric/

An organization wants to achieve its high-availability goal for Mule applications in a customer-hosted runtime plane. Due to the complexity involved, data cannot be shared among different instances of the same Mule application. Which option best suits this requirement, considering that high availability is critical to the organization?


A. A cluster can be configured


B. Use a third-party product to implement a load balancer


C. High availability can be achieved only in CloudHub


D. Use persistent object store





B.
  Use a third-party product to implement a load balancer

Explanation:

High availability is about the uptime of your application.

Option C ("High availability can be achieved only in CloudHub") is not a correct statement; high availability can also be achieved in customer-hosted runtime planes.

Option D: an object store is a facility for storing objects in or across Mule applications. Mule runtime engine (Mule) uses object stores to persist data for eventual retrieval. It can help with disaster recovery, but not with high availability: using an object store cannot guarantee that all instances won't go down at once, so it is not an appropriate choice. (Reference: https://docs.mulesoft.com/mule-runtime/4.3/mule-object-stores)

For on-premises MuleSoft implementations, high availability can be achieved through two models:

1) Mule clustering, where multiple Mule servers belong to the same cluster and a load balancer routes requests among them. A cluster is a set of up to eight servers that act as a single deployment target and high-availability processing unit. Application instances in a cluster are aware of each other, share common information, and synchronize statuses; if one server fails, another server takes over processing applications. However, the scenario states that data cannot be shared among different instances of the same application, so clustering (option A) is not a correct choice. (Reference: https://docs.mulesoft.com/runtime-manager/cluster-about)

2) Load-balanced standalone Mule instances, where high availability is achieved without a cluster by using a third-party load balancer that points requests at different Mule servers. This approach does not share or synchronize data between Mule runtimes, which matches the constraint in the scenario and makes option B the correct answer.

