
failed to flush chunk


Setup: Fluent Bit (the problem reproduces with 1.8.12, 1.8.15 and 1.9.0, image fluent/fluent-bit:1.9.0-debug) runs on Kubernetes nodes (CentOS 7.9, kernel 5.4 LTS), tails the container logs under /var/log/containers/ and ships them to Elasticsearch 7.6.2 (Kibana 7.6.2) through the es output, configured with Host 10.3.4.84, Logstash_Format On, Logstash_Prefix node, Retry_Limit False and no TLS. The engine keeps emitting warnings like these:

[2022/03/25 07:08:22] [ warn] [engine] failed to flush chunk '1-1648192101.677940929.flb', retry in 9 seconds: task_id=4, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920426.171646994.flb', retry in 632 seconds: task_id=233, input=tail.0 > output=es.0 (out_id=0)

The retry interval keeps growing, hundreds of retry tasks accumulate (task_id=700 and beyond), and eventually the tail input stops reading new lines altogether:

[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
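Pieced together from those settings, the output section presumably looks roughly like the sketch below. Only the values quoted above are certain; Match and Port are assumptions.

    [OUTPUT]
        Name            es
        Match           *              # assumed: match everything coming from tail.0
        Host            10.3.4.84
        Port            9200
        Logstash_Format On
        Logstash_Prefix node
        Retry_Limit     False          # retry failed chunks indefinitely
        tls             Off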
With debug logging enabled the HTTP side looks healthy: keep-alive connections to 10.3.4.84:9200 are assigned and recycled normally, and every flush ends with "[outputes.0] HTTP Status=200 URI=/_bulk", so the cluster is reachable and the bulk request itself is accepted. The failure is hidden inside the bulk response. Since v1.8.7 the es output supports Trace_Error On, which prints the response body; with it enabled (as suggested in fluent-bit issue #3301), the response reports "errors":true and shows the individual documents being rejected with status 400:

{"took":1935,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"c-Mnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}
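The conflicting mapping can be checked directly against the index named in the rejection. This is the standard Elasticsearch field-mapping API rather than a command quoted in the thread; host and index are taken from the logs above.

    curl -s 'http://10.3.4.84:9200/logstash-2022.03.24/_mapping/field/kubernetes.labels.app?pretty'

If the field comes back as "type": "text", any document that tries to nest further fields under kubernetes.labels.app will be rejected exactly as shown.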
The root cause is a mapping conflict created by dots in Kubernetes label names. The kubernetes filter places pod labels under kubernetes.labels.*, and the index logstash-2022.03.24 already exists with kubernetes.labels.app mapped as text (written by pods that carry a plain app label). When a pod is labelled app.kubernetes.io/instance instead, Elasticsearch's dynamic mapping expands the dots into object nesting and would have to turn kubernetes.labels.app into an object, which contradicts the existing text mapping, so the document is rejected. A single rejected event is enough to mark the whole bulk as failed from Fluent Bit's point of view (observed here with roughly 5 MB bulks of about 1000 events), so the entire chunk is retried; with Retry_Limit False the retries never stop, chunks pile up, and tail.0 ends up paused.
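A minimal reproduction of the conflict against the same index, outside of Fluent Bit (illustrative only; the label values are invented):

    # first document makes Elasticsearch map kubernetes.labels.app as text
    curl -XPOST 'http://10.3.4.84:9200/logstash-2022.03.24/_doc' \
         -H 'Content-Type: application/json' \
         -d '{"kubernetes":{"labels":{"app":"hello-world"}}}'

    # second document needs kubernetes.labels.app to be an object -> mapper_parsing_exception
    curl -XPOST 'http://10.3.4.84:9200/logstash-2022.03.24/_doc' \
         -H 'Content-Type: application/json' \
         -d '{"kubernetes":{"labels":{"app.kubernetes.io/instance":"argo"}}}'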
The fix is the one described in the #3301 comment and in https://github.com/fluent/fluent-bit/issues/4386: enable Replace_Dots On in the es output, so that dotted label keys such as app.kubernetes.io/instance are written with underscores instead of dots, and delete the existing index. Deleting the index matters: the conflicting mapping lives in the index itself, so even after adding Replace_Dots the "failed to flush chunk" warnings keep coming until the old index is removed and recreated with the new field names. Upgrading Elasticsearch to the latest 7.x release was also considered, but it does not remove a mapping conflict that already exists in the index.
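A sketch of the fix, keeping the earlier settings; whether the index can simply be dropped depends on whether its data is expendable (the DELETE below is irreversible):

    [OUTPUT]
        Name            es
        Match           *
        Host            10.3.4.84
        Port            9200
        Logstash_Format On
        Logstash_Prefix node
        Replace_Dots    On         # app.kubernetes.io/instance -> app_kubernetes_io/instance
        Trace_Error     On         # keep printing Elasticsearch errors until the warnings stop

    # remove the index that already holds the conflicting kubernetes.labels.app mapping
    curl -XDELETE 'http://10.3.4.84:9200/logstash-2022.03.24'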
A second warning appears repeatedly alongside the flush failures:

[2022/03/24 04:20:51] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000

It means the Elasticsearch response to a bulk request is larger than the output's read buffer (512000 bytes here), so the tail of the response, which contains the per-item errors, is cut off. Raising the es output's Buffer_Size, or setting it to False to remove the cap, ensures the full error response can be read and traced.
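A minimal sketch of that adjustment (the 2MB value is only a guess; Buffer_Size False lifts the limit entirely):

    [OUTPUT]
        Name        es
        Match       *
        Host        10.3.4.84
        Port        9200
        Buffer_Size 2MB        # or: Buffer_Size False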
Several of the reports in the thread did not include enough detail to diagnose, so the first request from maintainers was always the same: enable debug log level and share both the configuration and the logs. Beyond that, a few debugging aids came up. The configurations in the thread already run the built-in HTTP server ([SERVICE] with HTTP_Server On, HTTP_Listen 0.0.0.0, HTTP_Port 2020), so the fluentbit_output_proc_records_total counter can be watched to see whether records are actually leaving the output. Traffic on the forward port can be captured with sudo tcpdump -i eth0 tcp port 24224 -X -s 0 -nn, and suspected memory problems can be chased with valgrind td-agent-bit -c /path/to/td-agent-bit.conf.
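With the HTTP server enabled, the counters are available from the monitoring endpoint; the URL below is Fluent Bit's standard v1 metrics API rather than a command taken from the thread:

    curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus | grep fluentbit_output

A retries counter that keeps climbing while fluentbit_output_proc_records_total stays flat confirms that chunks are being rejected rather than delivered slowly.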
Finally, the same "failed to flush chunk" warning shows up in the thread from quite different setups: a Graylog stack deployed with Helm charts, a fluentd 1.4.2 installation with elasticsearch-plugin 7.1.0 logging "failed to flush the buffer" (in one case with two of five fluentd pods OOMKilled and restarting), sassoftware/viya4-monitoring-kubernetes#431 where Host is templated as {{ .Release.Name }}-elasticsearch-master, a kafka output, a tcp-to-websocket pipeline, a Loki backend, an Azure blob destination (where the storage account kind/SKU made no difference), and a plain fluentbit-to-fluentd test over IPv6. In every case the warning itself only says that an output could not deliver a chunk and that the engine will retry; it is the symptom, not the diagnosis. The useful information is in the destination's response, so the first step is always to raise the log level (for the es output, also set Trace_Error On), pull the forwarder logs (for example sudo kubectl logs -n rtf -l app=external-log-forwarder), and look at what the backend actually returned. On the fluentd side, keep in mind that fluentd waits to flush buffered chunks for delayed events and does not cope well with a very large number of queued chunks at startup, so a long outage of the destination can itself become the next problem.
