The article addresses the problem of making REST-based calls from AEM to an external system and the best way to handle HTTP requests.
Requirement:
The article discusses the implementation of an OSGi-based REST service in AEM that integrates with an external system using an HTTP client factory. The author provides detailed steps on how to create a new Apache CloseableHttpClient, prepare the request configuration, pool HTTP connections, and use default headers and a keep-alive strategy to execute requests.
Create an OSGi-based REST service to integrate AEM with the external system, and also provide an OSGi configuration for the endpoint options and client factory settings.
Introduction:
As we all know, AEM is a REST-based web application; however, is there a way to build an OSGi-based service that makes calls to an external system?
After going through the ACS AEM Commons HTTP Client factory, I created a richer, more configurable HTTP client factory.
Create HTTPClientFactory Service Interface:
This service interface exposes the basic HTTP REST operations such as GET, PUT, POST, and DELETE.
package com.example.core.services;
import org.apache.http.client.fluent.Executor;
import org.apache.http.client.fluent.Request;
/**
* Factory for building pre-configured HttpClient Fluent Executor and Request objects
* based on a configured host, port and (optionally) username/password.
* Factories will generally be accessed by service lookup using the factory.name property.
*/
public interface HttpClientFactory {
/**
* Get the configured Executor object from this factory.
*
* @return an Executor object
*/
Executor getExecutor();
/**
* Create a GET request using the base hostname and port defined in the factory configuration.
*
* @param partialUrl the portion of the URL after the port (and slash)
*
* @return a fluent Request object
*/
Request get(String partialUrl);
/**
* Create a PUT request using the base hostname and port defined in the factory configuration.
*
* @param partialUrl the portion of the URL after the port (and slash)
*
* @return a fluent Request object
*/
Request put(String partialUrl);
/**
* Create a POST request using the base hostname and port defined in the factory configuration.
*
* @param partialUrl the portion of the URL after the port (and slash)
*
* @return a fluent Request object
*/
Request post(String partialUrl);
/**
* Create a DELETE request using the base hostname and port defined in the factory configuration.
*
* @param partialUrl the portion of the URL after the port (and slash)
*
* @return a fluent Request object
*/
Request delete(String partialUrl);
/**
* Create a OPTIONS request using the base hostname and port defined in the factory configuration.
*
* @param partialUrl the portion of the URL after the port (and slash)
*
* @return a fluent Request object
*/
Request options(String partialUrl);
/**
* Get the external URI type from the factory configuration.
*
* @return External URI Type
*/
String getExternalURIType();
/**
* Get the apiStoreLocatorHostName from the factory configuration.
*
* @return API StoreLocatorHost
*/
String getApiStoreLocatorHostName();
/**
* Create a POST request using an absolute URL instead of the configured base host and port.
*
* @param absoluteUrl the full URL to POST to
*
* @return a fluent Request object
*/
Request postWithAbsolute(String absoluteUrl);
}
Create HTTPClientFactoryConfig:
Add the required attributes to configure the HttpClientFactory.
package com.example.services.config;
import org.osgi.service.metatype.annotations.AttributeDefinition;
import org.osgi.service.metatype.annotations.AttributeType;
import org.osgi.service.metatype.annotations.ObjectClassDefinition;
import com.example.constants.Constants;
@ObjectClassDefinition(name = "Http Client API Configuration", description = "Http Client API Configuration")
public @interface HttpClientFactoryConfig {
@AttributeDefinition(name = "API Host Name", description = "API host name, e.g. https://example.com", type = AttributeType.STRING)
String apiHostName() default Constants.DEFAULT_API_HOST_NAME;
@AttributeDefinition(name = "API URI Type Path", description = "API URI type path, e.g. /services/int/v2", type = AttributeType.STRING)
String uriType() default Constants.DEFAULT_API_URI_TYPE_PATH;
@AttributeDefinition(name = "API URI Type Path", description = "API URI type path, e.g. /services/ext/v2", type = AttributeType.STRING)
String uriExternalType() default Constants.DEFAULT_API_URI_EXTERNAL_TYPE_PATH;
@AttributeDefinition(name = "Relaxed SSL", description = "Defines if self-certified certificates should be allowed to SSL transport", type = AttributeType.BOOLEAN)
boolean relaxedSSL() default Constants.DEFAULT_RELAXED_SSL;
@AttributeDefinition(name = "Store Locator API Host Name", description = "Store Locator API host name, e.g. https://example.com", type = AttributeType.STRING)
String apiStoreLocatorHostName() default Constants.DEFAULT_STORE_LOCATOR_API_HOST_NAME;
@AttributeDefinition(name = "Maximum number of total open connections", description = "Set maximum number of total open connections, default 5", type = AttributeType.INTEGER)
int maxTotalOpenConnections() default Constants.DEFAULT_MAXIMUM_TOTAL_OPEN_CONNECTION;
@AttributeDefinition(name = "Maximum number of concurrent connections per route", description = "Set the maximum number of concurrent connections per route, default 5", type = AttributeType.INTEGER)
int maxConcurrentConnectionPerRoute() default Constants.DEFAULT_MAXIMUM_CONCURRENT_CONNECTION_PER_ROUTE;
@AttributeDefinition(name = "Default Keep alive connection in seconds", description = "Default Keep alive connection in seconds, default value is 1", type = AttributeType.LONG)
int defaultKeepAliveconnection() default Constants.DEFAULT_KEEP_ALIVE_CONNECTION;
@AttributeDefinition(name = "Default connection timeout in seconds", description = "Default connection timout in seconds, default value is 30", type = AttributeType.LONG)
long defaultConnectionTimeout() default Constants.DEFAULT_CONNECTION_TIMEOUT;
@AttributeDefinition(name = "Default socket timeout in seconds", description = "Default socket timeout in seconds, default value is 30", type = AttributeType.LONG)
long defaultSocketTimeout() default Constants.DEFAULT_SOCKET_TIMEOUT;
@AttributeDefinition(name = "Default connection request timeout in seconds", description = "Default connection request timeout in seconds, default value is 30", type = AttributeType.LONG)
long defaultConnectionRequestTimeout() default Constants.DEFAULT_CONNECTION_REQUEST_TIMEOUT;
}
Create HttpClientFactoryImpl Service implementation:
This is the implementation class for the HttpClientFactory service; during @Activate/@Modified we create a new Apache CloseableHttpClient using the OSGi-provided HttpClientBuilderFactory.
An HTTP client is like a dish: it tastes better when the recipe is good and you prepare it well before making calls to the external system.
Close all Connections:
Make sure to close any previously created client and its pooled connections when the component is deactivated, and before a new client is built when the component is activated or modified.
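A minimal sketch of that cleanup, assuming the implementation keeps the CloseableHttpClient and PoolingHttpClientConnectionManager in fields (the field and logger names here are illustrative):
@Deactivate
protected void deactivate() {
    try {
        if (httpClient != null) {
            // closes the client and releases its pooled connections
            httpClient.close();
        }
        if (connectionManager != null) {
            // shuts down the pooling connection manager
            connectionManager.close();
        }
    } catch (IOException e) {
        LOGGER.error("Error while closing the HTTP client", e);
    }
}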
Preparing Request Configuration:
Create a RequestConfig object and set the connection timeout, socket timeout, and connection request timeout based on the service configuration, as shown below.
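A sketch using org.apache.http.client.config.RequestConfig; config is the OSGi configuration object from above, and the configured values are in seconds while HttpClient expects milliseconds:
// build the request configuration from the OSGi config values
RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout((int) TimeUnit.SECONDS.toMillis(config.defaultConnectionTimeout()))
        .setSocketTimeout((int) TimeUnit.SECONDS.toMillis(config.defaultSocketTimeout()))
        .setConnectionRequestTimeout((int) TimeUnit.SECONDS.toMillis(config.defaultConnectionRequestTimeout()))
        .build();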
Pooling HTTP Connection:
PoolingHttpClientConnectionManager maintains a pool of HttpClientConnections and is able to service connection requests from multiple execution threads. Connections are pooled on a per-route basis. A request for a route for which the manager already has persistent connections available in the pool will be serviced by leasing a connection from the pool rather than creating a brand-new connection.
Hence, set the maximum pool size and the default maximum number of connections per route (per endpoint).
One thing to be aware of before pooling connections: are you making HTTPS calls to the external system? If yes, create an SSLConnectionSocketFactory with a NOOP hostname verifier and register the trusted certificates, as sketched below.
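A sketch of how this could look with Apache HttpClient 4.x; config is the OSGi configuration from above, and the checked exceptions thrown by the SSL context builder are omitted for brevity:
// pooling connection manager, optionally with a relaxed SSL socket factory
PoolingHttpClientConnectionManager connectionManager;
if (config.relaxedSSL()) {
    SSLContext sslContext = SSLContextBuilder.create()
            .loadTrustMaterial(null, TrustSelfSignedStrategy.INSTANCE)
            .build();
    SSLConnectionSocketFactory sslSocketFactory =
            new SSLConnectionSocketFactory(sslContext, NoopHostnameVerifier.INSTANCE);
    Registry<ConnectionSocketFactory> registry = RegistryBuilder.<ConnectionSocketFactory>create()
            .register("http", PlainConnectionSocketFactory.getSocketFactory())
            .register("https", sslSocketFactory)
            .build();
    connectionManager = new PoolingHttpClientConnectionManager(registry);
} else {
    connectionManager = new PoolingHttpClientConnectionManager();
}
// limit the pool size and the per-route (per-endpoint) connections
connectionManager.setMaxTotal(config.maxTotalOpenConnections());
connectionManager.setDefaultMaxPerRoute(config.maxConcurrentConnectionPerRoute());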
Default Keep Alive Strategy:
If the Keep-Alive header is not present in the response, HttpClient assumes the connection can be kept alive indefinitely. However, many HTTP servers in general use are configured to drop persistent connections after a certain period of inactivity to conserve system resources, often without informing the client. In case the default strategy turns out to be too optimistic, one may want to provide a custom keep-alive strategy.
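A sketch of such a custom strategy, based on the standard HttpClient keep-alive example: honour the server's Keep-Alive header when present, otherwise fall back to the configured default (config as above):
// custom keep-alive strategy passed to the HttpClientBuilder
ConnectionKeepAliveStrategy keepAliveStrategy = (response, context) -> {
    HeaderElementIterator it = new BasicHeaderElementIterator(
            response.headerIterator(HTTP.CONN_KEEP_ALIVE));
    while (it.hasNext()) {
        HeaderElement element = it.nextElement();
        String value = element.getValue();
        if (value != null && "timeout".equalsIgnoreCase(element.getName())) {
            // the server told us how long to keep the connection alive
            return Long.parseLong(value) * 1000;
        }
    }
    // no Keep-Alive header: use the configured default (in seconds)
    return TimeUnit.SECONDS.toMillis(config.defaultKeepAliveConnection());
};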
HTTP Client Builder OSGi Service:
Get the reference to OSGi-based httpClientBuilderFactory service, prepare a new builder, set the request configuration, and add a connection manager with a pooling connection.
Add the default headers and the keepAlive strategy so that we do not have to create a new connection for every request.
Finally, create the HTTP client out of this builder and hand the client to the Apache fluent Executor.
The fluent Executor wraps an arbitrary HttpClient instance and executes requests through it.
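Putting the pieces together, the activation logic could look roughly like this; requestConfig, connectionManager, and keepAliveStrategy are the objects from the sketches above, and the default header shown is illustrative:
@Reference
private HttpClientBuilderFactory httpClientBuilderFactory;

@Activate
@Modified
protected void activate(HttpClientFactoryConfig config) {
    // obtain a builder from the OSGi HttpClientBuilderFactory service
    HttpClientBuilder builder = httpClientBuilderFactory.newBuilder();
    builder.setDefaultRequestConfig(requestConfig);
    builder.setConnectionManager(connectionManager);
    builder.setKeepAliveStrategy(keepAliveStrategy);
    builder.setDefaultHeaders(Collections.singletonList(
            new BasicHeader(HttpHeaders.CONNECTION, "keep-alive")));
    CloseableHttpClient httpClient = builder.build();
    // hand the configured client to the fluent Executor used by get()/post()/put()/delete()
    this.executor = Executor.newInstance(httpClient);
}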
The article aims to explain the process of delegating OOTB components in AEM and customizing them by adding new parameters and methods.
Requirement:
This article is about delegating the Out Of The Box (OOTB) components of Adobe Experience Manager (AEM) using Lombok delegation. The author explains how to add new parameters and methods to the components using delegation. Specifically, the article covers how to delegate the OOTB image component, add custom logic to its existing methods, and introduce new methods such as getCredit() and getSizes().
Delegate OOTB image component and add custom logic on existing methods like getWidth(), getHeight(), and getSrcSet() and also add new methods getCredit() and getSizes()
Introduction:
Any field or no-argument method can be annotated with @Delegate to let Lombok generate delegate methods that forward the call to this field (or the result of invoking this method).
Lombok delegates all public methods of the field’s type (or method’s return type), as well as those of its supertypes except for all methods declared in java.lang.Object.
You can pass any number of classes into the @Delegate annotation’s types parameter. If you do that, then Lombok will delegate all public methods in those types (and their supertypes, except java.lang.Object) instead of looking at the field/method’s type.
All public non-Object methods that are part of the calculated type(s) are copied, whether or not you also wrote implementations for those methods. That would thus result in duplicate method errors. You can avoid these by using the @Delegate(excludes=SomeType.class) parameter to exclude all public methods in the excluded type(s), and their supertypes.
The below image shows how the Lombok delegation happens:
Lombok request delegation
Whenever the Image Sling Model is adapted, Lombok delegation forwards all public methods to the delegate field, excluding the methods we want to override so that we can provide custom implementations for them.
Image Component Delegation
For our requirement, I am creating a class called ImageDelegate and implementing the Image component interface.
I will wire in the OOTB Image component model using @Self with @Via(ResourceSuperType), and with the @Delegate annotation I will exclude some of the methods declared in a DelegationExclusion interface.
I will add custom code to the overridden methods and also introduce the new methods getCredit() and getSizes(), as shown in the sketch below.
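A minimal sketch of this delegation, assuming the Core Components Image model as the delegate; the package, resource mapping, the excluded method signatures (which vary by Core Components version), and the getCredit()/getSizes() logic are all illustrative:
package com.example.core.models;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.models.annotations.DefaultInjectionStrategy;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.Via;
import org.apache.sling.models.annotations.injectorspecific.Self;
import org.apache.sling.models.annotations.injectorspecific.ValueMapValue;
import org.apache.sling.models.annotations.via.ResourceSuperType;
import com.adobe.cq.wcm.core.components.models.Image;
import lombok.experimental.Delegate;
@Model(adaptables = SlingHttpServletRequest.class, defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL)
public class ImageDelegate implements Image {
    // methods excluded from delegation so we can supply our own implementations
    interface DelegationExclusion {
        String getWidth();
        String getHeight();
        String getSrcset();
    }
    // the OOTB image model resolved via the resource super type; all remaining
    // public Image methods are generated by Lombok and forwarded to this field
    @Self
    @Via(type = ResourceSuperType.class)
    @Delegate(types = Image.class, excludes = DelegationExclusion.class)
    private Image image;
    @ValueMapValue
    private String credit;
    @ValueMapValue
    private String sizes;
    public String getWidth() {
        // custom logic can be applied before or after delegating to the OOTB model
        return image.getWidth();
    }
    public String getHeight() {
        return image.getHeight();
    }
    public String getSrcset() {
        return image.getSrcset();
    }
    // newly introduced methods backed by dialog properties (illustrative)
    public String getCredit() {
        return credit;
    }
    public String getSizes() {
        return sizes;
    }
}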
When should you commit or save nodes using resource resolver, and is it advisable to save nodes inside a loop?
Requirement:
This article discusses best practices for committing and saving nodes using AEM (Adobe Experience Manager) Resource Resolver. It highlights the importance of proper synchronization when working with resources and the different methods that can be used to manage them. The article also provides examples of saving nodes inside and outside for loops and explains how to avoid updating nodes unnecessarily by validating property existence and values.
Save 50 nodes with some properties
Introduction:
The ResourceResolver defines the API which may be used to resolve Resource objects and work with such resources as creating, editing or updating them. The resource resolver is available to the request processing servlet through the SlingHttpServletRequest.getResourceResolver() method. A resource resolver can also be created through the ResourceResolverFactory service.
A ResourceResolver is generally not thread safe! As a consequence, an application that uses the resolver, its returned resources, and/or objects resulting from adapting either the resolver or a resource, must provide proper synchronization to ensure that no more than one thread concurrently operates against a single resolver, resource, or resulting object.
The resolver provides various methods to resolve and manage resources, for example:
create(Resource, String, Map) – creates a new resource.
delete(Resource) – deletes a resource.
adaptTo(Class) – allows adapting a resource to a ModifiableValueMap in order to update it.
move(String, String) – moves resources.
copy(String, String) – copies resources.
commit() – commits all staged changes.
revert() – reverts all staged changes.
All changes are transient and need to be committed at the end.
Hence, as per the API documentation, it's better to stage all the changes before calling commit or revert.
But please make sure we are not trying to save millions of nodes at a time; also, updating existing nodes takes more time than creating new ones, as shown in the adaptTo() conference showcase.
Hence check whether the node already has the property and value before you save it.
Resolution:
Saving the resolver inside a for loop
For our use case, I am using ResourceUtil.getOrCreateResource() to create or get the existing node; if the node gets created, it is saved with default properties such as jcr:primaryType = nt:unstructured.
Using ResourceUtil increases code readability and maintainability
Parameters:
resolver – the resource resolver to use for the creation
path – the full path to be created
resourceProperties – the optional resource properties of the final resource to create
intermediateResourceType – the optional resource type of all intermediate resources
autoCommit – if set to true, a commit is performed after each resource creation
In the below example, inside a for loop I create the node (resource) with default properties and set autoCommit to true. After creating the resource, I adapt it to a ModifiableValueMap, add a new property "name" with the value "property" + index, and commit the resolver.
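A sketch of what that looks like; defaultNodeProperties is assumed to hold the default properties (e.g. the jcr:primaryType), and the resolver is obtained as in the full example further below:
// autoCommit = true, plus an explicit commit per iteration - costly for larger batches
for (int index = 0; index < 50; index++) {
    Resource savedResource = ResourceUtil.getOrCreateResource(resourceResolver,
            "/content/" + index, defaultNodeProperties, StringUtils.EMPTY, true);
    ModifiableValueMap map = savedResource.adaptTo(ModifiableValueMap.class);
    map.put("name", "property" + index);
    resourceResolver.commit();
}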
In the next variation, I set autoCommit to false and the rest of the code remains the same, but I commit the resolver outside the for loop. By doing so, I can stage all changes on the resource resolver and commit them at the end.
ResourceUtil would handle getting the existing resource instead of recreating it, but I would still be updating the resource and committing the changes, which is a costly process.
Better implementation with Validation
To avoid updating the node unnecessarily, we validate that the property exists and already has the expected value, and commit only if the resolver actually has changes.
public void saveNodes() {
    try (ResourceResolver resourceResolver = resolverFactory.getServiceResourceResolver(
            Collections.singletonMap(ResourceResolverFactory.SUBSERVICE, SERVICE_USER))) {
        for (int index = 0; index < 50; index++) {
            @NotNull
            Resource savedResource = ResourceUtil.getOrCreateResource(resourceResolver, "/content/" + index,
                    defaultNodeProperties, StringUtils.EMPTY, false);
            ModifiableValueMap map = savedResource.adaptTo(ModifiableValueMap.class);
            // only touch the node if the property is missing or has a different value
            if (!map.containsKey("name") || !StringUtils.equals(map.get("name", StringUtils.EMPTY), "property" + index)) {
                map.put("name", "property" + index);
            }
        }
        // commit once, and only when something actually changed
        if (resourceResolver.hasChanges()) {
            resourceResolver.commit();
        }
    } catch (LoginException | PersistenceException e) {
        LOGGER.error("Error occurred while saving nodes", e);
    }
}
What is the recommended way to create a custom workflow process in AEM?
Requirement:
This article discusses the recommended approach to creating custom workflow processes in Adobe Experience Manager (AEM) for implementing business logic. The author outlines the steps for creating a custom workflow process, including creating a component service that implements the WorkflowProcess interface, providing a process.label, and overriding the execute method. The article includes a code example to illustrate the process.
Create the custom Workflow Process to do some business logic
Introduction:
WorkflowProcess is the interface to be used for automatic workflow steps implemented in Java. Classes implementing this interface define Java based processes that can be attached to a WorkflowNode and executed by the workflow engine.
Create a component service registered as WorkflowProcess.class and implement the WorkflowProcess interface
Provide a process.label for every custom workflow process
Override the execute method
package com.mysite.core.workflows;
import org.apache.sling.api.resource.ResourceResolver;
import org.osgi.service.component.annotations.Component;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.adobe.granite.workflow.WorkflowException;
import com.adobe.granite.workflow.WorkflowSession;
import com.adobe.granite.workflow.exec.WorkItem;
import com.adobe.granite.workflow.exec.WorkflowProcess;
import com.adobe.granite.workflow.metadata.MetaDataMap;
/**
* Sample workflow process that adapts the workflow session to a resource
* resolver so that custom business logic can be executed on the payload.
*/
@Component(service = WorkflowProcess.class, property = { "process.label=Custom Workflow Process" })
public class CustomWorkflowProcess implements WorkflowProcess {
    private static final Logger LOGGER = LoggerFactory.getLogger(CustomWorkflowProcess.class);

    @Override
    public void execute(WorkItem workItem, WorkflowSession workflowSession, MetaDataMap metaDataMap)
            throws WorkflowException {
        // adapt the workflow session to a resource resolver to work with the payload
        ResourceResolver resourceResolver = workflowSession.adaptTo(ResourceResolver.class);
        LOGGER.debug("Executing workflow process for payload {}", workItem.getWorkflowData().getPayload());
    }
}
What is the Adobe recommended way to create a TransformerFactory in AEM?
Requirement:
This article provides best practices for creating Transformers in Adobe Experience Manager (AEM). It discusses the recommended way to create a TransformerFactory and outlines the steps to create a powerful mechanism that rewrites the output generated by the Sling rendering process. The article also includes sample code for creating a Component Service with TransformerFactory class and implementing a TransformerFactory class.
Create the Transformer to rewrite the output with powerful mechanisms
Introduction:
The TransformerFactory is a service that creates Transformers on demand. The created transformers form the middle part of the rewriter pipeline. The factories themselves are not chained, but the resulting transformers are. On each pipeline call new instances are created. The factory is referenced using a service property named 'pipeline.type'. Each factory should have a unique value for this property. With the optional property 'pipeline.mode' set to the value 'global', the transformer is used for each and every pipeline regardless of the actual configuration for this pipeline. All available global transformers with a service ranking below zero are chained right after the generator. All available global transformers with a service ranking higher or equal to zero are chained right before the serializer. Therefore the property 'service.ranking' should be used for the factory in combination with 'pipeline.mode'. To be compatible with possible future uses of the 'pipeline.mode' property, it should only be used with the value 'global'.
This is a powerful mechanism that rewrites the output (typically HTML markup) generated by the Sling rendering process. It is part of the Apache Sling Rewriter module, which uses SAX event-based pipelines as shown here.
Every pipeline consists of three components: a generator, one or more transformers, and a serializer. Each component has a corresponding Java interface and factory.
Create a component service registered as TransformerFactory.class and implement the TransformerFactory interface
Provide a pipeline.type for the transformer
Override the createTransformer method
package com.mysite.core.filters;
import org.apache.sling.rewriter.Transformer;
import org.apache.sling.rewriter.TransformerFactory;
import org.osgi.service.component.annotations.Component;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* {@link TransformerFactory} defined to create new {@link ContentVariableTransformer} objects and pass in the reference to
* the service used to aggregate properties.
*/
@Component(service = TransformerFactory.class, property = {
"pipeline.type=ccvar-transformer"
})
public class ContentVariableTransformerFactory implements TransformerFactory {
private static final Logger LOG = LoggerFactory.getLogger(ContentVariableTransformerFactory.class);
@Override
public Transformer createTransformer() {
LOG.trace("Content Variable Transformer");
return new ContentVariableTransformer();
}
}
Create the transformer implementation and override the init() method
package com.mysite.core.filters;
import java.io.IOException;
import java.util.Map;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.rewriter.ProcessingComponentConfiguration;
import org.apache.sling.rewriter.ProcessingContext;
import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import com.adobe.acs.commons.rewriter.ContentHandlerBasedTransformer;
/**
* {@link org.apache.sling.rewriter.Transformer} used to process HTML requests and replace content tokens found in the
* rendered HTML.
*/
public class ContentVariableTransformer extends ContentHandlerBasedTransformer {
    private Map<String, Object> contentVariableReplacements;

    public ContentVariableTransformer() {
        // default constructor used by the factory
    }

    @Override
    public void init(ProcessingContext processingContext, ProcessingComponentConfiguration processingComponentConfiguration) throws IOException {
        // the current request is available here, e.g. to prepare the replacements map
        SlingHttpServletRequest request = processingContext.getRequest();
    }

    @Override
    public void startElement(String uri, String localName, String qName, Attributes atts) throws SAXException {
        // inspect or rewrite element attributes here, then forward the event down the pipeline
        super.startElement(uri, localName, qName, atts);
    }

    @Override
    public void characters(char[] ch, int start, int length) throws SAXException {
        // rewrite the character data here, then forward the event down the pipeline
        super.characters(ch, start, length);
    }
}
What are the recommended ways to create filters in AEM?
Requirement:
This article discusses the best practices for creating filters in Adobe Experience Manager (AEM). Filters are objects that perform filtering tasks on requests or responses from a resource. The article covers creating a component service with a filter class, providing a service description, ranking, and vendor for the filter, and overriding the doFilter method to chain requests and add logs.
Create the Filters to chain the requests and add logs
Introduction:
A filter is an object that performs filtering tasks on either the request to a resource (a servlet or static content), the response from a resource, or both.
Filters perform filtering in the doFilter method. Every Filter has access to a FilterConfig object from which it can obtain its initialization parameters, and a reference to the ServletContext which it can use, for example, to load resources needed for filtering tasks.
Filters are configured in the deployment descriptor of a web application.
Create a component service registered as Filter.class and implement the Filter interface
Provide service description, ranking, and vendor for the filter
Override the doFilter method and chain the requests
package com.mysite.core.filters;
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.engine.EngineConstants;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.propertytypes.ServiceDescription;
import org.osgi.service.component.propertytypes.ServiceRanking;
import org.osgi.service.component.propertytypes.ServiceVendor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Simple servlet filter component that logs incoming requests.
*/
@Component(service = Filter.class,
property = {
EngineConstants.SLING_FILTER_SCOPE + "=" + EngineConstants.FILTER_SCOPE_REQUEST,
})
@ServiceDescription("Demo to filter incoming requests")
@ServiceRanking(-700)
@ServiceVendor("Adobe")
public class LoggingFilter implements Filter {
private final Logger logger = LoggerFactory.getLogger(getClass());
@Override
public void doFilter(final ServletRequest request, final ServletResponse response,
final FilterChain filterChain) throws IOException, ServletException {
final SlingHttpServletRequest slingRequest = (SlingHttpServletRequest) request;
logger.debug("request for {}, with selector {}", slingRequest
.getRequestPathInfo().getResourcePath(), slingRequest
.getRequestPathInfo().getSelectorString());
filterChain.doFilter(request, response);
}
@Override
public void init(FilterConfig filterConfig) {
}
@Override
public void destroy() {
}
}
What is the Adobe Experience Manager recommended way to create a Listener?
Requirement:
Adobe provides a Framework service registry that allows EventHandler objects to be registered and notified when an event is sent or posted. This article explains the recommended way to create a listener that handles resource events in AEM.
Introduction:
EventHandler objects are registered with the Framework service registry and are notified with an Event object when an event is sent or posted.
EventHandler objects can inspect the received Event object to determine its topic and properties.
EventHandler objects must be registered with a service property EventConstants.EVENT_TOPIC whose value is the list of topics in which the event handler is interested.
Listener:
Create a component service registered as EventHandler.class and implement the EventHandler interface
Provide the event topic to listen on
Override the handleEvent method
package com.mysite.core.listeners;
import org.apache.sling.api.SlingConstants;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.propertytypes.ServiceDescription;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@Component(service = EventHandler.class,
immediate = true,
property = {
EventConstants.EVENT_TOPIC + "=org/apache/sling/api/resource/Resource/*"
})
@ServiceDescription("Demo to listen on changes in the resource tree")
public class SimpleResourceListener implements EventHandler {
private final Logger logger = LoggerFactory.getLogger(getClass());
@Override
public void handleEvent(final Event event) {
logger.debug("Resource event: {} at: {}", event.getTopic(), event.getProperty(SlingConstants.PROPERTY_PATH));
}
}
Can Java Streams and Resource Filter be used as an alternative to Query Builder queries in AEM for filtering pages and resources based on specific criteria?
Requirement:
Query for the pages whose resource type is "wknd/components/page", get the child resources that are Image components ("wknd/components/image"), and collect the file reference properties into a list.
A Query Builder query would look like this:
@PostConstruct
private void initModel() {
    Map<String, String> map = new HashMap<>();
    map.put("path", resource.getPath());
    map.put("property", "sling:resourceType");
    map.put("property.value", "wknd/components/page");
    PredicateGroup predicateGroup = PredicateGroup.create(map);
    QueryBuilder queryBuilder = resourceResolver.adaptTo(QueryBuilder.class);
    Query query = queryBuilder.createQuery(predicateGroup, resourceResolver.adaptTo(Session.class));
    SearchResult result = query.getResult();
    List<String> imagePath = new ArrayList<>();
    try {
        for (final Hit hit : result.getHits()) {
            Resource resultResource = hit.getResource();
            @NotNull
            Iterator<Resource> children = resultResource.listChildren();
            while (children.hasNext()) {
                final Resource child = children.next();
                if (StringUtils.equalsIgnoreCase(child.getResourceType(), "wknd/components/image")) {
                    Image image = modelFactory.getModelFromWrappedRequest(request, child, Image.class);
                    imagePath.add(image.getFileReference());
                }
            }
        }
    } catch (RepositoryException e) {
        LOGGER.error("Error occurred while getting the result resource {}", e.getMessage());
    }
}
Introduction
This article discusses the use of Java Streams and Resource Filter in optimizing AEM Query Builder queries. The article provides code examples for using Resource Filter Streams to filter pages and resources and using Java Streams to filter and map child resources based on specific criteria. The article also provides optimization strategies for AEM tree traversal to reduce memory consumption and improve performance.
Resource Filter bundle provides a number of services and utilities to identify and filter resources in a resource tree.
Resource Filter Stream:
ResourceFilterStream combines the ResourceStream functionality with the ResourcePredicates service to provide the ability to define a Stream<Resource> that follows specific child pages and looks for specific resources as defined by the resource filter script. The ResourceFilterStream is accessed by adaptation.
The ResourceFilter and ResourceFilterStream can have key-value pairs added so that the values may be used as part of the script resolution. Parameters are accessed by using the dollar sign '$'.
Similar to indexing for a query, there are strategies you can apply within a tree traversal so that it can be performed efficiently across a large number of resources. The following strategies will assist in traversal optimization.
Limit traversal paths
In a naive implementation of a tree traversal, the traversal occurs across all nodes in the tree regardless of the ability of the tree structure to support the nodes that are being looked for. An example of this is a tree of Page resources that has a child node of jcr:content which contains a subtree of data to define the page structure. If the jcr:content node is not capable of having a child resource of type Page, and the goal of the traversal is to identify Page resources that match specific criteria, then the traversal of the jcr:content node cannot lead to additional matches. Using this knowledge of the resource structure, you can improve performance by adding a branch selector that prevents the traversal from proceeding down a nonproductive path.
Limit memory consumption
The instantiation of a Resource object from the underlying ResourceResolver is a nontrivial consumption of memory. When the focus of a tree traversal is obtaining information from thousands of Resources, an effective method is to extract the information as part of the stream processing or utilize the forEach method of the ResourceStream object which allows the resource to be garbage collected in an efficient manner.
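For the requirement above, the usage could look roughly like this; this is a sketch that assumes the ResourceFilterStream adaptation and the filter-script syntax described in the Sling Resource Filter documentation, and both selector scripts are illustrative:
// adapt the root resource to a ResourceFilterStream, select the image components
// in the subtree, and collect their fileReference properties
ResourceFilterStream rfs = resource.adaptTo(ResourceFilterStream.class);
List<String> imagePaths = rfs
        .setBranchSelector("[sling:resourceType] != 'wknd/components/image'")   // illustrative: do not descend below image components
        .setChildSelector("[sling:resourceType] == 'wknd/components/image'")
        .stream()
        .map(image -> image.getValueMap().get("fileReference", String.class))
        .filter(Objects::nonNull)
        .collect(Collectors.toList());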
How can I iterate child nodes and get certain properties? Specifically, the requirement is to get child resources of the current resource and get all image component file reference properties into a list.
Requirement:
Get child resources of the current resource and get all image component file reference properties into a list
Can I use Java 8 Streams?
Introduction: Using a while or for loop:
@PostConstruct
private void initModel() {
    List<String> imagePath = new ArrayList<>();
    Iterator<Resource> children = resource.listChildren();
    while (children.hasNext()) {
        final Resource child = children.next();
        if (StringUtils.equalsIgnoreCase(child.getResourceType(), "wknd/components/image")) {
            Image image = modelFactory.getModelFromWrappedRequest(request, child, Image.class);
            imagePath.add(image.getFileReference());
        }
    }
}
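To answer the Java 8 Streams question directly: the same loop can be written as a stream over the child resources. A sketch (StreamSupport turns the Iterable returned by getChildren() into a Stream; modelFactory and request are the same injected objects used above):
@PostConstruct
private void initModel() {
    List<String> imagePath = StreamSupport.stream(resource.getChildren().spliterator(), false)
            .filter(child -> StringUtils.equalsIgnoreCase(child.getResourceType(), "wknd/components/image"))
            .map(child -> modelFactory.getModelFromWrappedRequest(request, child, Image.class))
            .filter(Objects::nonNull)
            .map(Image::getFileReference)
            .collect(Collectors.toList());
}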
Introduction: AbstractResourceVisitor:
Sling provides the AbstractResourceVisitor API, which traverses a resource tree and helps in collecting child properties.
Create the class which extends AbstractResourceVisitor abstract class
Override accept, traverseChildren and visit methods as shown below
Call visit() inside the accept() method instead of super.visit(); I have observed that the tree was traversed twice when using super, so keep this in mind
package utils;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.commons.lang3.StringUtils;
import org.apache.sling.api.resource.AbstractResourceVisitor;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ValueMap;
import com.day.cq.wcm.foundation.Image;
import org.jetbrains.annotations.NotNull;
public class ExampleResourceVisitor extends AbstractResourceVisitor {
private static final String IMAGE_RESOURCE_TYPE = "wknd/components/image";
private static final String TEXT_RESOURCE_TYPE = "wknd/components/text";
private static final ArrayList<String> ACCEPTED_PRIMARY_TYPES = new ArrayList<>();
static {
ACCEPTED_PRIMARY_TYPES.add(IMAGE_RESOURCE_TYPE);
ACCEPTED_PRIMARY_TYPES.add(TEXT_RESOURCE_TYPE);
}
private final List<String> imagepaths = new ArrayList<>();
public List<String> getImagepaths() {
return imagepaths;
}
@Override
public final void accept(final Resource resource) {
if (null != resource) {
final ValueMap properties = resource.adaptTo(ValueMap.class);
final String primaryType = properties.get(ResourceResolver.PROPERTY_RESOURCE_TYPE, StringUtils.EMPTY);
if(ACCEPTED_PRIMARY_TYPES.contains(primaryType)){
visit(resource);
}
this.traverseChildren(resource.listChildren());
}
}
@Override
protected void traverseChildren(final @NotNull Iterator<Resource> children) {
while (children.hasNext()) {
final Resource child = children.next();
accept(child);
}
}
@Override
protected void visit(@NotNull Resource resource) {
final ValueMap properties = resource.adaptTo(ValueMap.class);
final String primaryType = properties.get(ResourceResolver.PROPERTY_RESOURCE_TYPE, StringUtils.EMPTY);
if (StringUtils.equalsIgnoreCase(primaryType, IMAGE_RESOURCE_TYPE)) {
imagepaths.add(properties.get(Image.PN_REFERENCE, StringUtils.EMPTY));
}
}
}
Instantiate the ExampleResourceVisitor, pass the resource to its accept() method, and call getImagepaths() to get the list of image paths
@PostConstruct
private void initModel() {
ExampleResourceVisitor exampleResourceVisitor = new ExampleResourceVisitor();
exampleResourceVisitor.accept(resource);
List<String> imageVisitorPaths = exampleResourceVisitor.getImagepaths();
}
Introduction: Resource Filter Stream:
Resource Filter bundle provides a number of services and utilities to identify and filter resources in a resource tree.
Resource Filter Stream:
ResourceFilterStream combines the ResourceStream functionality with the ResourcePredicates service to provide the ability to define a Stream<Resource> that follows specific child pages and looks for specific resources as defined by the resource filter script. The ResourceFilterStream is accessed by adaptation.
Similar to indexing for a query, there are strategies you can apply within a tree traversal so that it can be performed efficiently across a large number of resources. The following strategies will assist in traversal optimization.
Limit traversal paths
In a naive implementation of a tree traversal, the traversal occurs across all nodes in the tree regardless of the ability of the tree structure to support the nodes that are being looked for. An example of this is a tree of Page resources that has a child node of jcr:content which contains a subtree of data to define the page structure. If the jcr:content node is not capable of having a child resource of type Page, and the goal of the traversal is to identify Page resources that match specific criteria, then the traversal of the jcr:content node cannot lead to additional matches. Using this knowledge of the resource structure, you can improve performance by adding a branch selector that prevents the traversal from proceeding down a nonproductive path.
Limit memory consumption
The instantiation of a Resource object from the underlying ResourceResolver is a nontrivial consumption of memory. When the focus of a tree traversal is obtaining information from thousands of Resources, an effective method is to extract the information as part of the stream processing or utilize the forEach method of the ResourceStream object which allows the resource to be garbage collected in an efficient manner.