
Logpush

What happens if my cloud storage destination is temporarily unavailable?

Logpush is designed to retry in case of errors. If your destination is temporarily unavailable, Logpush retries approximately five times over about five minutes; these figures are approximations. If Cloudflare persistently receives errors from your destination and cannot keep up with incoming batches, Logpush will eventually drop logs. If the errors continue for a prolonged period of time, Logpush assumes the destination is permanently unavailable and disables your push job. You can always re-enable the job later.

Can I adjust how often logs are pushed?

No. Cloudflare pushes logs in batches as soon as possible.

My job was accidentally turned off, and I did not receive my logs for a certain time period. Can they still be pushed to me?

No. Logpush pushes logs only once, as they become available, and cannot backfill. However, logs are stored for at least 72 hours and can be downloaded using the Logpull API.
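
For example, a window of logs that was missed while a job was off can be downloaded with a Logpull API request like the following (a sketch; timestamps may be RFC 3339 or Unix epoch, the field list is up to you, and the range must fall within the retention window):

# Download stored logs for a time window (placeholders in angle brackets)
curl -s "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logs/received?start=<START_TIMESTAMP>&end=<END_TIMESTAMP>&fields=RayID,ClientIP,EdgeStartTimestamp" \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>"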

I reconfigured my job to a new destination. Why am I still receiving logs at the old destination?

We cannot provide a precise time, but the transition to the new destination typically completes within 10-15 minutes. During that window, pushes may still go to the old destination. Refer to the question above about temporarily unavailable destinations for more details.

If I add new fields to an existing Logpush job, how long will it take for the change to become effective?

We cannot provide a precise time, but we estimate that the new fields will show up within 10-15 minutes.
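
For example, assuming a job configured with the legacy logpull_options parameter, a field can be added by updating the job over the API (a sketch; BotScore is just an example field to add):

# Update the job's field list; the change takes effect within roughly 10-15 minutes
curl -sX PUT https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/<JOB_ID> \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-d '{"logpull_options":"fields=RayID,ClientIP,EdgeStartTimestamp,BotScore&timestamps=rfc3339"}'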

Why am I receiving a validating destination error while setting up a Splunk job?

You could be seeing this error for multiple reasons:

  • The Splunk endpoint URL is not correct. Cloudflare only supports the Splunk HEC raw endpoint over HTTPS (see the destination string sketch after this list).
  • The Splunk authentication token is not correct. Be sure to URL-encode the token; for example, use %20 for a space.
  • The certificate for the Splunk server is not properly configured. Whether generated by Splunk or by a third party, the certificate's Common Name field must match the Splunk server's domain name. Otherwise, you may see errors like: x509: certificate is valid for SplunkServerDefaultCert, not <YOUR_INSTANCE>.splunkcloud.com.
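
For reference, a Splunk destination string generally takes the following shape (a sketch; all values are placeholders, and the raw HEC endpoint is typically of the form https://<SPLUNK_HOST>:8088/services/collector/raw):

splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=false&sourcetype=cloudflare:json&header_Authorization=Splunk%20<SPLUNK_AUTH_TOKEN>

Note how the space between Splunk and the token in the authorization value is URL-encoded as %20.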

What is the insecure-skip-verify parameter in Splunk jobs?

This flag, if set to true, makes an insecure connection to Splunk. Setting it to true is equivalent to using the -k option with curl, as shown in Splunk examples, and is not recommended. Cloudflare strongly recommends setting this flag to false.

As noted above, the certificate, whether generated by Splunk or by a third party, must have a Common Name field that matches the Splunk server's domain name. Otherwise, you may see errors like: x509: certificate is valid for SplunkServerDefaultCert, not <YOUR_INSTANCE>.splunkcloud.com. This happens especially with the default certificate that Splunk generates on startup. Pushes will never succeed until the certificates are fixed.

The proper way to resolve the issue is to fix the certificates. This flag exists only for the rare scenarios in which you do not have the access or permissions to fix the certificates, such as Splunk Cloud instances, which do not allow changes to the Splunk server configuration.
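
To see which name the certificate actually presents, you can inspect it with openssl (a standard diagnostic; substitute your own hostname and HEC port). If the subject shows SplunkServerDefaultCert, the server is still serving Splunk's default certificate:

# Print the subject of the certificate served on the HEC port
openssl s_client -connect <YOUR_INSTANCE>.splunkcloud.com:8088 -servername <YOUR_INSTANCE>.splunkcloud.com </dev/null 2>/dev/null | openssl x509 -noout -subject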

How can I verify that my Splunk HEC is working correctly before setting up a job?

Ensure that you can publish events to your Splunk instance through curl without the -k flag and with the insecure-skip-verify parameter set to false, as in the following example:

curl "https://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=false&sourcetype=<SOURCE_TYPE>" \
-H "Authorization: Splunk <SPLUNK_AUTH_TOKEN>" \
-d '{"BotScore":99,"BotScoreSrc":"Machine Learning","CacheCacheStatus":"miss","CacheResponseBytes":2478}'
{"text":"Success","code":0}

Can I use any HEC network port in the Splunk destination conf?

No. Cloudflare expects the HEC network port to be configured to :443 or :8088.

Does Logpush integrate with the Cloudflare Splunk App?

Yes. Refer to Cloudflare App for Splunk for more information. As long as you ingest logs using the cloudflare:json source type, you can use the Cloudflare Splunk App.

How can I upgrade my Logpush job from v1 to v2?

Simply updating a Logpush job does not upgrade it from v1 to v2. To upgrade a job to v2, you must use the API and set the logstream parameter to true:

curl -sX PUT https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/<JOB_ID> \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-d '{"logstream":true}'

How can I check my Logpush job version?

You can use the API to get details about your Logpush jobs. If the job details in the response include "logstream": true, the job is running on Logpush v2.
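
For example (a sketch using the job detail endpoint):

# Fetch the job details and look for "logstream": true
curl -s https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/<JOB_ID> \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>"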